EDK : FSL macros defined by Xilinx are wrong

On 09/15/2011 12:52 PM, Andy wrote:

Using pure Sn lead finishes in standard Pb soldering profiles is a no-
no.

You need to run your profile at the higher, Pb-free process
temperatures. Make sure all of your materials (board, other
components, flux, paste, etc.) can handle the higher temperatures. If
you do not get up to the Pb-free process temperatures, the Sn finish
is not annealed properly, and thus stresses between plating and base
metal are not relieved properly, which causes an increase in Sn-
whisker growth rate.
Xilinx has knowledge base articles where they specifically say that
you can safely use their lead-free parts in standard Sn/Pb solder
processes without change to the process. Or, at least that is how I
read what they said there.

Jon
 
On Sep 15, 11:59 am, Jon Elson <el...@pico-systems.com> wrote:
Nico Coesel wrote:

My guess is that you'll need to look at the temperature profile of the
soldering process. I'd get some lead-free soldering experts to look at
the problem.

My feeling on this is that the whiskers have been growing over the
6 months of storage, and that whisker growth is not possible during
the reflow.  I believe all the other parts on the board are ALSO lead-free,
and the whiskers are ONLY showing up on this ONE part.  And, we use
other Xilinx parts where it is NOT showing up.  ONLY on the 100-lead
QFP, but not on 44- or 144-lead parts.

Searching the literature, I have NOT found anyone who says temperature
profile has ANY effect on whisker growth.  Alloys, stresses in the tin
plating, thickness of the tin plating, purity (or lack of) in the Tin,
storage conditions (humidity and thermal cycling) have all been implicated
in affecting the rate or prevalence of the whisker growth.  But, I
have never seen a paper that mentions the reflow temp profile.  If you
have a reference, I'd like to read it.

Thanks,

Jon
Using pure Sn lead finishes in standard Pb soldering profiles is a no-
no.

You need to run your profile at the higher, Pb-free process
temperatures. Make sure all of your materials (board, other
components, flux, paste, etc.) can handle the higher temperatures. If
you do not get up to the Pb-free process temperatures, the Sn finish
is not annealed properly, and thus stresses between plating and base
metal are not relieved properly, which causes an increase in Sn-
whisker growth rate.

Andy
 
On Sep 14, 11:02 pm, Mark Thorson <nos...@sonic.net> wrote:
Quadibloc wrote:

So, just as larger caches are the present-day form of memory on the
chip, coarse-grained configurability will be the way to increase
yields, if not the way to progress to that old idea of wafer-scale
integration. (That was, of course, back in the days of three-inch
wafers. Fitting an eight-inch wafer into a convenient consumer
package, let alone dealing with its heat dissipation, hardly bears
thinking about.)

Oh, sure it does.  Just have four of them on the top
of the box, put it in the kitchen, and call it a stove.
Ah. Do your power gaming while waiting for supper to be ready. But
silicon carbide has too many defects to run chips at that temperature
yet...

John Savard
 
In article <j4r508$rgl$1@speranza.aioe.org>, gah@ugcs.caltech.edu (glen
herrmannsfeldt) wrote:

Very few problems divide up that way. For those that do, static
reconfiguration is usually the best choice. Dynamic reconfiguration
is fun, but most often doesn't seem to work well with real problems.
GreenArrays is already selling processor arrays, with the biggest being
144 processors. Each processor has its own on-chip memory and fast
communication channels to the others. They can be configured
individually. The supplied language is colorForth. I do not know anything
more about this, just what I picked up reading comp.lang.forth, but
evaluation boards are available.

Ken Young
 
On Sep 14, 2:06 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
6.  A new language with APL-like semantics would allow programmers to
state their wishes at a high enough level for compilers to determine
the low-level method of execution that best matches the particular
hardware that is available to execute it.

APL hasn't been popular over the years, and it could have done
most of this for a long time.  On the other hand, you might look
at the ZPL language.  Not as high-level, but maybe more practical.
ACM killed off SIGAPL about 5-6 years ago. Sorry to see it go.

Have a look at the DARPA HPCS languages, notably, Chapel, Fortress and
X10. Not entirely sure about their respective statuses, but they were
an attempt in the HPC arena to raise the level of abstraction.


-scooter
 
<nmm1@cam.ac.uk> wrote:
+---------------
| For example, there are people starting to think about genuinely
| unreliable computation, of the sort where you just have to live
| with ALL paths being unreliable. After all, we all use such a
| computer every day ....
+---------------

Yes, there are such people, those in the Computational Complexity
branch of Theoretical Computer Science who are working on bounded-error
probabilistic classes, both classical & in quantum computing:

http://en.wikipedia.org/wiki/Bounded-error_probabilistic_polynomial
Bounded-error probabilistic polynomial
In computational complexity theory, bounded-error probabilistic
polynomial time (BPP) is the class of decision problems solvable
by a probabilistic Turing machine in polynomial time, with an error
probability of at most 1/3 for all instances.
...

http://en.wikipedia.org/wiki/BQP
BQP
In computational complexity theory BQP (bounded error quantum
polynomial time) is the class of decision problems solvable by
a quantum computer in polynomial time, with an error probability
of at most 1/3 for all instances. It is the quantum analogue of
the complexity class BPP.
...

Though the math seems to be way ahead of the hardware currently... ;-}


-Rob

-----
Rob Warnock <rpw3@rpw3.org>
627 26th Avenue <http://rpw3.org/>
San Mateo, CA 94403
 
In article <270b4f6b-a8e8-4af0-bf4a-c36da1864692@u19g2000vbm.googlegroups.com>,
Scott Michel <scooter.phd@gmail.com> wrote:
On Sep 14, 2:06 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
6. A new language with APL-like semantics would allow programmers to
state their wishes at a high enough level for compilers to determine
the low-level method of execution that best matches the particular
hardware that is available to execute it.

APL hasn't been popular over the years, and it could have done
most of this for a long time. On the other hand, you might look
at the ZPL language. Not as high-level, but maybe more practical.

ACM killed off SIGAPL about 5-6 years ago. Sorry to see it go.

Have a look at the DARPA HPCS languages, notably, Chapel, Fortress and
X10. Not entirely sure about their respective statuses, but they were
an attempt in the HPC arena to raise the level of abstraction.
No, they aren't. Sorry. They raise the level above that of the
mainstream 1970s and 1980s languages, but not above that of later
ones such as, say, modern Fortran.

What they do do is to try to raise the level of the abstraction
of the parallelism. I found Chapel and Fortress a bit unambitious,
and wasn't convinced that any of them were likely to be genuinely
and widely useful. However, I mean to take another look when (!)
I get time. May I also draw your attention to BSP?

Despite a lot of effort over the years, nobody has ever thought of
a good way of abstracting parallelism in programming languages.
All viable ones have chosen a single model and stuck with it,
though sometimes (as with MPI) one programming model can be easily
and efficiently implemented on another as well.


Regards,
Nick Maclaren.
 
In article <rqSdnb7HzcZh5OnTnZ2dnUVZ_umdnZ2d@speakeasy.net>,
Rob Warnock <rpw3@rpw3.org> wrote:
+---------------
| For example, there are people starting to think about genuinely
| unreliable computation, of the sort where you just have to live
| with ALL paths being unreliable. After all, we all use such a
| computer every day ....
+---------------

Yes, there are such people, those in the Computational Complexity
branch of Theoretical Computer Science who are working on bounded-error
probabilistic classes, both classical & in quantum computing:
Please don't associate what I say with the auto-eroticism of those
lunatics. While there may be some that are better than that, I have
seen little evidence of it in their papers.

The work that I am referring to almost entirely either predates
computer scientists or is being done a long way away from that area.

http://en.wikipedia.org/wiki/Bounded-error_probabilistic_polynomial
Bounded-error probabilistic polynomial
In computational complexity theory, bounded-error probabilistic
polynomial time (BPP) is the class of decision problems solvable
by a probabilistic Turing machine in polynomial time, with an error
probability of at most 1/3 for all instances.
The fundamental mathematical defects of that formulation are left as
an exercise for the reader. Hint: if you are a decent mathematical
probabilist, they will jump out at you.

http://en.wikipedia.org/wiki/BQP
BQP
In computational complexity theory BQP (bounded error quantum
polynomial time) is the class of decision problems solvable by
a quantum computer in polynomial time, with an error probability
of at most 1/3 for all instances. It is the quantum analogue of
the complexity class BPP.

Though the math seems to be way ahead of the hardware currently... ;-}
And the mathematics is itself singularly unimpressive.


Regards,
Nick Maclaren.
 
On 9/17/11 1:44 AM, nmm1@cam.ac.uk wrote:
In article<270b4f6b-a8e8-4af0-bf4a-c36da1864692@u19g2000vbm.googlegroups.com>,
Despite a lot of effort over the years, nobody has ever thought of
a good way of abstracting parallelism in programming languages.
CSP?
 
On 9/17/11 10:55 AM, nmm1@cam.ac.uk wrote:
In article<4E74E439.4000107@bitblocks.com>,
Bakul Shah<usenet@bitblocks.com> wrote:

Despite a lot of effort over the years, nobody has ever thought of
a good way of abstracting parallelism in programming languages.

CSP?

That is a model for describing parallelism of the message-passing
variety (including the use of Von Neumann shared data), and is in
no reasonable sense an abstraction for use in programming languages.
I have not seen anything as elegant as CSP & Dijkstra's
Guarded commands and they have been around for 35+ years.

But perhaps we mean different things? I am talking about
naturally parallel problems. Here is an example (the first
such problem I was given in an OS class ages ago): S students,
each has to read B books in any order; the school library has
C_i copies of the i-th book. Model this with S student
processes and a librarian process! As you can see this is
an allegory of a resource allocation problem.

It is easy to see how to parallelize an APL expression like
"F/(V1 G V2)", where scalar functions F & G take two args.
[In Scheme: (vector-fold F (vector-map G V1 V2))]. You'd have
to know the properties of F & G to do it right but potentially
this can be compiled to run on N parallel cores and these N
pieces will have to use message passing. I would like to be
able to express such decomposition in the language itself.

So you will have to elaborate why and how CSP is not a
reasonable abstraction for parallelism. Erlang, Occam & Go use
it! Go's channels and `goroutines' are easy to use.
 
In article <4E74E439.4000107@bitblocks.com>,
Bakul Shah <usenet@bitblocks.com> wrote:
Despite a lot of effort over the years, nobody has ever thought of
a good way of abstracting parallelism in programming languages.

CSP?
That is a model for describing parallelism of the message-passing
variety (including the use of Von Neumann shared data), and is in
no reasonable sense an abstraction for use in programming languages.

BSP is. Unfortunately, it is not a good one, though I teach and
recommend that people consider it :-(


Regards,
Nick Maclaren.
 
In article <4E74F69C.5080009@bitblocks.com>,
Bakul Shah <usenet@bitblocks.com> wrote:
Despite a lot of effort over the years, nobody has ever thought of
a good way of abstracting parallelism in programming languages.

CSP?

That is a model for describing parallelism of the message-passing
variety (including the use of Von Neumann shared data), and is in
no reasonable sense an abstraction for use in programming languages.

I have not seen anything as elegant as CSP & Dijkstra's
Guarded commands and they have been around for 35+ years.
Well, measure theory is also extremely elegant, and has been around
for longer, but is not a usable abstraction for programming.

But perhaps we mean different things? I am talking about
naturally parallel problems. Here is an example (the first
such problem I was given in an OS class ages ago): S students,
each has to read B books in any order; the school library has
C_i copies of the i-th book. Model this with S student
processes and a librarian process! As you can see this is
an allegory of a resource allocation problem.

Such problems are almost never interesting in practice, and very
often not in theory. Programming is about mapping a mathematical
abstraction of an actual problem into an operational description
for a particular agent.

Perhaps the oldest and best established abstraction for programming
languages is procedures, but array (SIMD) notation and operations
are also ancient, and are inherently parallel. However, 50 years
of experience demonstrates that they are good only for some kinds
of problem and types of agent.


Regards,
Nick Maclaren.
 
Jon Elson wrote:

Hmmm, one additional tidbit. Some boards reflowed at the
same time have been stored in a lab environment. These boards
in question were stored in my basement for six months. The lab env. boards
show no sign of the whiskers. Conditions in my basement are
not bad at all, but it is likely more humid down there than
in the lab. So, I guess this means don't store lead-free
boards in humid conditions.

Jon
 
Jon Elson <elson@pico-systems.com> wrote:

Jon Elson wrote:

Hmmm, one additional tidbit. Some boards reflowed at the
same time have been stored in a lab environment. These boards
in question were stored in my basement for six months. The lab env. boards
show no sign of the whiskers. Conditions in my basement are
not bad at all, but it is likely more humid down there than
in the lab. So, I guess this means don't store lead-free
boards in humid conditions.
IMHO this is the wrong solution. Actually it is not a solution at all.
You really should get in touch with someone who has experience in this
field in order to solve the problem at the root.

--
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------
 
On Sat, 17 Sep 2011 09:44:35 +0100, nmm1 wrote:

Despite a lot of effort over the years, nobody has ever thought of a
good way of abstracting parallelism in programming languages.
That's not really all that surprising though, is it? Hardware that
exhibits programmable parallelism has taken many different forms over the
years, especially with many different scales of granularity of the
parallelisable sequential operations and inter-processor communications.
The entire issue of parallelism is essentially orthogonal to the
sequential Turing/von Neumann model of computation that is at the heart of
most programming languages. It's not obvious (to me) that a single
language could reasonably describe a problem and have it map efficiently
across "classical" cross-bar shared memory systems (including barrel
processors), NUMA shared memory, distributed shared memory, clusters, and
clouds (the latter just an example of the dynamic resource count vs known-
at-compile-time axis) all of which incorporate both sequential and vector
(and GPU-style) resources.

Which is not to say that such a thing can't exist. My expectation is
that it will wind up being something very functional in shape that
relaxes as many restrictions on order-of-execution as possible (including
order of argument evaluation), sitting on top of a dynamic execution
environment that can compile and re-compile code and shift it around in
the system to match the data that is observed at run-time.

That is: the language can't assume a Turing model, but rather a more
mathematical or declarative one. The compiler has to choose where
sequential execution can be applied, and where that isn't appropriate.

Needless to say, we're not there yet, but I expect to see it in the next
dozen or so years.

Cheers,

--
Andrew
 
On 9/18/11 12:38 AM, nmm1@cam.ac.uk wrote:
In article<4E74F69C.5080009@bitblocks.com>,
Bakul Shah<usenet@bitblocks.com> wrote:

I have not seen anything as elegant as CSP& Dijkstra's
Guarded commands and they have been around for 35+ years.

Well, measure theory is also extremely elegant, and has been around
for longer, but is not a usable abstraction for programming.
Your original statement was
Despite a lot of effort over the years, nobody has ever thought of
a good way of abstracting parallelism in programming languages.
I gave some counterexamples but instead of responding to that,
you bring in some random assertion. If you'd used Erlang or Go and
had actual criticisms that would at least make this discussion
interesting. Ah well.
 
Nico Coesel wrote:


IMHO this is the wrong solution. Actually it is not a solution at all.
You really should get in touch with someone who has experience in this
field in order to solve the problem at the root.

You have to understand this is a REALLY small business. I have an
old Philips pick & place machine in my basement, and reflow the boards
in a toaster oven, with a thermocouple reading temperature of the boards.
I can't afford to have a $3000 a day consultant come in, and they'd just
laugh when they saw my equipment.

I could go to an all lead-free process, but these boards have already been
made with plain FR-4 and tin-lead finish. As for getting tin/lead parts,
that is really difficult for a number of the components.

And, I STILL don't know why this ONE specific part is the ONLY one to show
this problem. I use a bunch of other parts from Xilinx with no whiskers,
as well as from a dozen other manufacturers.

Jon
 
In comp.arch.fpga Andrew Reilly <areilly---@bigpond.net.au> wrote:
On Sat, 17 Sep 2011 09:44:35 +0100, nmm1 wrote:

Despite a lot of effort over the years, nobody has ever thought of a
good way of abstracting parallelism in programming languages.

That's not really all that surprising though, is it? Hardware that
exhibits programmable parallelism has taken many different forms over the
years, especially with many different scales of granularity of the
parallelisable sequential operations and inter-processor communications,
Yes, but programs tend to follow the mathematics of matrix algebra.

A language that allowed for parallel processing of matrix operations,
independent of the underlying hardware, should help.

Note that both the PL/I and Fortran array assignment semantics complicate
parallel processing. In the case of overlap, where elements changed
in the destination can later be used in the source, PL/I requires
that the new value be used (as if processed sequentially), whereas
Fortran requires that the old value be used (a temporary array may
be needed). The Fortran FORALL conveniently doesn't help much.

A construct that allowed the compiler (and parallel processor) to
do the operations in any order, including a promise that no aliasing
occurs, and that no destination array elements are used in the source,
would, it seems to me, help.

Maybe even an assignment construct that allowed for a group of
assignments (presumably array assignments) to be executed, allowing
the compiler to do them in any order, again guaranteeing no aliasing
and no element reuse.

The entire issue of parallelism is essentially orthogonal to the
sequential Turing/von Neumann model of computation that is at the heart of
most programming languages. It's not obvious (to me) that a single
language could reasonably describe a problem and have it map efficiently
across "classical" cross-bar shared memory systems (including barrel
processors), NUMA shared memory, distributed shared memory, clusters, and
clouds (the latter just an example of the dynamic resource count vs known-
at-compile-time axis) all of which incorporate both sequential and vector
(and GPU-style) resources.
Well, part of it is that we aren't so good at thinking of problems
that way. We (people) like to think things through one step at a
time, and the von Neumann model allows for that.

Which is not to say that such a thing can't exist. My expectation is
that it will wind up being something very functional in shape that
relaxes as many restrictions on order-of-execution as possible (including
order of argument evaluation), sitting on top of a dynamic execution
environment that can compile and re-compile code and shift it around in
the system to match the data that is observed at run-time.

That is: the language can't assume a Turing model, but rather a more
mathematical or declarative one. The compiler has to choose where
sequential execution can be applied, and where that isn't appropriate.

Needless to say, we're not there yet, but I expect to see it in the next
dozen or so years.
In nuclear physics there is a constant describing the number of years
until viable nuclear fusion power plants can be built. It is a
constant in that it seems to always be (about) that many years in the
future. (I believe it is about 20 or 30 years, but I can't find a
reference.)

I wonder if this dozen years is also a constant. People have been
working on parallel programming for years, yet usable programming
languages are always in the future.

-- glen
 
On 9/18/2011 9:03 PM, glen herrmannsfeldt wrote:

In comp.arch.fpga Andrew Reilly<areilly---@bigpond.net.au> wrote:
On Sat, 17 Sep 2011 09:44:35 +0100, nmm1 wrote:

Despite a lot of effort over the years, nobody has ever thought of a
good way of abstracting parallelism in programming languages.

That's not really all that surprising though, is it? Hardware that
exhibits programmable parallelism has taken many different forms over the
years, especially with many different scales of granularity of the
parallelisable sequential operations and inter-processor communications,

Yes, but programs tend to follow the mathematics of matrix algebra.
Spoken like someone who would know the difference between covariant and
contravariant and wouldn't blink at a Christoffel symbol.

This is the "crystalline" memory structure that has so obsessed me. All
of the most powerful mathematical disciplines would at one time have fit
pretty well into this paradigm.

As Andy Glew commented, after talking to some CFD people, maybe the most
natural structure is not objects like vectors and tensors, but something
far more general. Trees (graphs) are important, and they can express a
much more general class of objects than multidimensional arrays. The
generality has an enormous price, of course.

<snip>

I wonder if this dozen years is also a constant. People have been
working on parallel programming for years, yet usable programming
languages are always in the future.

At least one and possibly more generations will have to die off. At one
time, science and technology progressed slowly enough that the tenure of
senior scientists and engineers was not an obvious obstacle to progress.
Now it is.

Robert.
 
On Sun, 18 Sep 2011 18:26:45 -0700, Bakul Shah wrote:

On 9/18/11 12:38 AM, nmm1@cam.ac.uk wrote:
In article<4E74F69C.5080009@bitblocks.com>,
Bakul Shah<usenet@bitblocks.com> wrote:

I have not seen anything as elegant as CSP& Dijkstra's Guarded
commands and they have been around for 35+ years.

Well, measure theory is also extremely elegant, and has been around for
longer, but is not a usable abstraction for programming.

Your original statement was
Despite a lot of effort over the years, nobody has ever thought of a
good way of abstracting parallelism in programming languages.

I gave some counterexamples but instead of responding to that, you
bring in some random assertion. If you'd used Erlang or Go and had
actual criticisms that would at least make this discussion interesting.
Ah well.
I've read the language descriptions of Erlang and Go and think that both
are heading in the right direction, in terms of practical coarse-grain
parallelism, but I doubt that there is a compiler (for any language) that
can turn, say, a large GEMM or FFT problem expressed entirely as
independent agents or go-routines (or futures) into cache-aware vector
code that runs nicely on a small-ish number of cores, if that's what you
happen to have available. It isn't really a question of language at all:
as you say, Erlang, Go and a few others already have quite reasonable
syntaxes for independent operation. The problem is one of compilation
competence: the ability to decide/adapt/guess vast collections of
nominally independent operations into efficient arbitrarily sequential
operations, rather than putting each potentially-parallel operation into
its own thread and letting the operating system's scheduler muddle
through it at run-time.

Cheers,

--
Andrew
 