EDK : FSL macros defined by Xilinx are wrong

On 17 Apr 2006 22:03:58 -0700, "Andrew FPGA"
<andrew.newsgroup@gmail.com> wrote:

Hi,
I have been unable to find any info/guidelines for PCB placement of the
DCI reference resistors. I.e. the resistors that attach to VRN and VRP.
My instinct says decoupling capacitors highest priority (closest to
FPGA package), DCI resistors next priority, and everything else lowest
priority.

How sensitive to noise are the VRP/VRN inputs?

FPGA is XC3S200-4FT256 and I'm using 49R9 DCI reference resistors.

Regards
Andrew
Great resource for decoupling cap placement and component priorities:

http://www.xilinx.com/bvdocs/appnotes/xapp623.pdf

According to this power distribution app note, page 12, placement of
termination resistors takes precedence over decoupling caps. The DCI
reference resistors function like termination resistors.

I have designed four different working boards using Virtex II Pro
and/or Virtex 4 chips, and always put the DCI resistors closest to the
FPGA on the ball layer. I can usually squeeze in a bunch of 0.01uF
0402 caps in the first ring around the FPGA perimeter on the ball
layer. More 0.01uF caps on the bottom layer. Next ring has 0.1uF
caps on ball and bottom layers, etc.

I always use side vias to power and gnd planes on my decoupling caps
(Page 6) to minimize inductance. Never have room for two vias per cap
lead (Fig 6D).

Do a search on "DCI" in the Xilinx Answers database for more info.

Hope this is helpful.
 
All,

Placement close to the chip is preferred, but since the ratio is 1:1,
1:1/2 or 1:2 (i.e. a 50 ohm resistor can be used for a 25 ohm drive
strength, or a split 100 ohm termination for 50 ohms), the impedance is
low enough that coupling to the pin and causing it to not terminate
correctly is unlikely.

There are DCI resistors for other companies' chips which use a 10:1
ratio, so a 500 ohm resistor is used for a 50 ohm termination. One must
be much more concerned with those resistors (as they are 10 times easier
to couple to).

I would not put the resistor ahead of where a bypass capacitor would go
(bypassing is more important, IMHO), but I would place it within 2 to 3"
of the device, and I would not place strongly switching traces
immediately adjacent to that 2 to 3" trace! I typically use a 3X
spacing rule for reference resistors (and reference voltage pins), so
that the trace to these pins has 3 times the normal spacing to its
neighbors. That takes the cross coupling to 1/9 of a normally spaced
pair of traces, which should be sufficient (but you should check anyway!).
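A quick check of that arithmetic (a first-order sketch only; it assumes coupling falls off roughly as the square of edge-to-edge spacing, which real geometry only approximates):

```python
# First-order model: coupling between a parallel trace pair falls off
# roughly as 1/d^2 with spacing d (an assumption for illustration;
# real crosstalk depends on trace geometry, height over the plane,
# and dielectric -- a field solver gives the real answer).
def relative_coupling(spacing_multiple):
    return 1.0 / spacing_multiple ** 2

# Tripling the spacing cuts coupling to about 1/9, as stated above.
assert abs(relative_coupling(3) - 1 / 9) < 1e-12
```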

Austin

mr_dsp@myrealbox.com wrote:

....snip...
 
Hi -

On Fri, 21 Apr 2006 11:00:08 -0700, mr_dsp@myrealbox.com wrote:

On 17 Apr 2006 22:03:58 -0700, "Andrew FPGA"
<andrew.newsgroup@gmail.com> wrote:

....snip...

Great resource for decoupling cap placement and component priorities:

http://www.xilinx.com/bvdocs/appnotes/xapp623.pdf

According to this power distribution app note, page 12, placement of
termination resistors takes precedence over decoupling caps. The DCI
reference resistors function like termination resistors.
I don't think that this is the case. These resistors are not
terminating a trace in any conventional sense; instead, they're acting
as half of a resistive divider, the other half consisting of the
N-channel or P-channel transistors inside the FPGA. The DCI cal
circuit is looking at the voltage level on the DCI reference pin when
the cal driver is on, to get an idea of whether the driver is higher
or lower than the required impedance. It adjusts the driver impedance
up or down based on the voltage it sees.

I certainly wouldn't put these resistors too far away; you don't want
noise coupling into the traces when the FPGA is trying to make a
calibration measurement. But the distance shouldn't be
super-critical, either.
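Bob's description of the cal circuit can be sketched as a simple feedback loop (a toy model only, not Xilinx's actual algorithm; the step size and starting value are made up for illustration):

```python
# Toy model of the DCI calibration loop described above.  The external
# VRP/VRN resistor forms one half of a divider and the adjustable
# driver leg inside the FPGA forms the other; the cal logic nudges the
# leg impedance until the reference pin sits at the midpoint, i.e.
# until the leg matches the external resistor.
R_EXT = 49.9                  # external reference resistor, ohms
VCC = 2.5

def divider_v(r_drv):
    """Voltage at the reference pin for a given driver impedance."""
    return VCC * R_EXT / (R_EXT + r_drv)

r_drv = 100.0                 # start with the driver leg too weak
for _ in range(200):
    if divider_v(r_drv) < VCC / 2:
        r_drv -= 0.5          # pin too low: driver impedance too high
    else:
        r_drv += 0.5          # pin too high: driver impedance too low

# r_drv now dithers within one step of R_EXT (49.9 ohms)
```

This is also why noise on the reference pin matters only during a calibration measurement, as Bob says: a glitch on divider_v at the wrong moment steps the impedance the wrong way.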

Bob Perlman
Cambrian Design Works
 
Can anyone @Xilinx confirm they read this and will take
care of it?


Sylvain Munaut wrote:
Hi everyone,

I hope someone @Xilinx will read this.

In the new EDK 8.1 the FSL access macros have changed
names, and _interruptible versions have also been introduced.
These are defined in
${EDK_HOME}/sw/lib/bsp/standalone_v1_00_a/src/microblaze/mb_interface.h

The definitions for getfsl_interruptible and
cgetfsl_interruptible are correct, but the ones for
putfsl_interruptible and cputfsl_interruptible are
incorrect. For example, putfsl_interruptible is:

#define putfsl_interruptible(val, id) \
asm volatile ("\n1:\n\tnput\t%0,rfsl" #id "\n\t" \
"addic\tr18,r0,0\n\t" \
"bnei\tr18,1b\n" \
: "=d" (val) :: "r18")

and it should be :

#define putfsl_interruptible(val, id) \
asm volatile ("\n1:\n\tnput\t%0,rfsl" #id "\n\t" \
"addic\tr18,r0,0\n\t" \
"bnei\tr18,1b\n" \
:: "d" (val) : "r18")

Obviously val is an input in the case of a 'put', not
an output.


Another related question: in my code, when I replace all
non-_interruptible versions with their _interruptible
counterparts, it doesn't behave as expected anymore...
Do these versions require some hw support?



Sylvain



PS: I know I should submit a webcase, but when I try to log in
I just get "Server Error"... and so I obviously can't even
submit a webcase about my problem of being unable to log in
to the webcase system...
 
Sylvain,

I got it.

I will find out what happened, and report back.

Thanks,

Austin

Sylvain Munaut wrote:

Can anyone @Xilinx can confirm they read this and will take
care of it ?


Sylvain Munaut wrote:

....snip...
 
On 15 Feb 2007 22:22:50 -0800, "werty" <werty@swissinfo.org> wrote:

In coax for 2.5 Ghz , for example ,
it WILL have a large diameter and
the center will have an exact dia and
ratio .. No substitutes .
Never heard of micro hardline, I guess. Or non-TEM propagation modes
in large-diameter coax.

John
 
On Apr 30, 4:18 pm, John Popelish <jpopel...@rica.net> wrote:
Default User wrote:
John Popelish wrote:

This is exactly the state diagram I drew before answering
the post. Nice work.

This has nothing to do with comp.lang.c. Please remove that newsgroup
from your distribution.

I don't know where the original poster is reading this thread.

You could make this discussion relevant by translating the
algorithm into C.
 
What has this newsgroup come to?
A series of 31 postings about a trivial design that (in Xilinx
parlance) fits into half a CLB.
There may be a dozen different implementations, but they are all
equally trivial.
Is there nothing better to discuss?
Peter Alfke

On Apr 30, 4:18 pm, John Popelish <jpopel...@rica.net> wrote:
....snip...
 
Peter Alfke wrote:
What has this newsgroup come to?
A series of 31 postings about a trivial design that (in Xilinx
parlance) fits into half a CLB.
There may be a dozen different implementations, but they are all
equally trivial.
Is there nothing better to discuss?
Peter Alfke
Well, since you ask, there IS another thread on Xilinx Software Quality,
that could use some more postings, - no comment from Xilinx yet :) ?

-jg
 
Sorry to waste everyone's bandwidth, but it needs to be said (in good fun
because I may want to work for Xilinx someday)

Oh, BURN!


---Matthew Hicks


Peter Alfke wrote:

....snip...
 
John Larkin wrote:

On Sat, 14 Jul 2007 14:17:53 -0400, krw <krw@att.bizzzz> wrote:

In article <670i93lg7f1m4jhrcu9l48hjosvoo0jcdv@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...

[1] statistical analysis of some FPGA configuration patterns, leading
up to a fast, small compression/decompression algorithm. We need to
fit an application program and 6 megabits of Xilinx config stuff into
a 4 mbit Eprom.
I wrote the following two small C programs in 2001 for an Altera FPGA. You
can't get it much simpler, and from a whole bunch of config files it gets
about a 50% compression factor (plus or minus 15%).

You may need to set the 'most-common value' from 0x00 to 0xff - I don't have
a lot of Xilinx bitstreams here to check what's best.

You could actually check the 'golden' bitstream to count which byte value
occurs the most...

Good luck!



Ben
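The C sources didn't survive this archive, but the scheme described (pick the most common byte value, then run-length encode its runs) might look something like this hypothetical Python sketch; the escape format is my own guess, not Ben's actual code:

```python
# Hypothetical sketch of a most-common-byte run-length encoder, in the
# spirit of the post above: FPGA config bitstreams are mostly filler
# bytes (0x00 or 0xFF), so encoding runs of the dominant byte gets a
# large fraction of the ~50% compression mentioned.
from collections import Counter

def compress(data: bytes) -> bytes:
    common = Counter(data).most_common(1)[0][0] if data else 0
    out = bytearray([common])            # 1-byte header: chosen byte
    i = 0
    while i < len(data):
        if data[i] == common:
            run = 1
            while i + run < len(data) and data[i + run] == common and run < 255:
                run += 1
            out += bytes([common, run])  # escape: common byte + run count
            i += run
        else:
            out.append(data[i])          # literals pass through untouched
            i += 1
    return bytes(out)

def decompress(blob: bytes) -> bytes:
    common, body = blob[0], blob[1:]
    out = bytearray()
    i = 0
    while i < len(body):
        if body[i] == common:
            out += bytes([common]) * body[i + 1]
            i += 2
        else:
            out.append(body[i])
            i += 1
    return bytes(out)
```

As in the post, the win depends entirely on how much of the stream is the dominant filler byte; on noise-like data this format grows the stream slightly.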
 
On Dec 4, 3:39 am, "Boudewijn Dijkstra" <boudew...@indes.com> wrote:
On Mon, 03 Dec 2007 18:27:50 +0100, rickman <gnu...@gmail.com> wrote:

On Dec 3, 4:14 am, "Boudewijn Dijkstra" <boudew...@indes.com> wrote:
....snip...
Given that uncompressible data often resembles noise, you have to ask
yourself: what would be lost?

The message! Just because the message "resembles" noise does not mean
it has no information. In fact, just the opposite.

If you are compressing reliably transmitted pure binary data, then you are
absolutely right. But if there is less information per datum, like in an
analog TV signal, something that resembles noise might very well be noise.
But noise and signal that "resembles noise" are two different things.
You can characterize noise and send a description of it. But if it
*isn't* noise, you have just turned part of your signal into noise. So
to take advantage of the fact that noise can be compressed by saying
"this is noise" requires that you separate the noise from the signal.
If you could do that, why would you even transmit the noise? You
wouldn't, you would remove it.

So the only type of "noise like" signal left is the part that *is*
signal and the part that can't be separated from the signal. Since
you can't distinguish between the two, you have to transmit them both
and suffer the inability to compress them.


Once you have a
message with no redundancy, you have a message with optimum
information content and it will appear exactly like noise.

Compression takes advantage of the portion of a message that is
predictable based on what you have seen previously in the message.
This is the content that does not look like noise. Once you take
advantage of this and recode to eliminate it, the message looks like
pure noise and is no longer compressible. But it is still a unique
message with information content that you need to convey.
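That claim is easy to demonstrate with any off-the-shelf compressor, e.g. Python's zlib:

```python
# A redundant message compresses heavily, but the compressor's output
# already looks like noise, so a second pass gains nothing (it actually
# grows slightly, by the container overhead).
import zlib

msg = b"the quick brown fox " * 200
once = zlib.compress(msg, 9)
twice = zlib.compress(once, 9)

assert len(once) < len(msg) // 10   # redundancy squeezed out
assert len(twice) >= len(once)      # no longer compressible
```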

....snip...

If you can identify the estimated compression beforehand and then split
the stream into a 'hard' part and an 'easy' part, then you have a way to
retain the average.

Doesn't that require sending additional information that is part of
the message?

Usually, yes.
How can you flag the "easy" (compressible) part vs. the "hard" part
without sending more bits?

On the average, this will add as much, if not more to
the message than you are removing...

Possibly.
As I describe below, compression only saves bits if your *average*
content has sufficient redundancy. So what does "possibly" mean?

If you are trying to compress data without loss, you can only compress
the redundant information. If the message has no redundancy, then it
is not compressible and, with *any* coding scheme, will require some
additional bandwidth than if it were not coded at all.

Think of your message as a binary number of n bits. If you want to
compress it to m bits, you can identify the 2**m most often
transmitted numbers and represent them with m bits. But the remaining
numbers cannot be transmitted in m bits at all. If you want to send
those you have to have a flag that says, "do not decode this number".
Now you have to transmit all n or m bits, plus the flag bit. Since
there are 2**n-2**m messages with n+1 bits and 2**m messages with m+1
bits, I think you will find the total number of bits is not less then
just sending all messages with n bits. But if the messages in the m
bit group are much more frequent, then you can reduce your *average*
number of bits sent. If you can say you will *never* send the numbers
that aren't in the m bit group, then you can compress the message
losslessly in m bits.
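The arithmetic in that last paragraph can be checked directly (using n=8, m=4 as a hypothetical example):

```python
# Counting argument from the paragraph above: give the 2**m "frequent"
# messages (m+1)-bit codewords (flag + m bits) and every other message
# an (n+1)-bit codeword (flag + n bits).  Summed over ALL messages,
# this costs more than sending everything uncoded in n bits -- the
# scheme only pays off if the short-code messages dominate the traffic.
n, m = 8, 4
frequent = 2 ** m                 # messages that get short codes
rest = 2 ** n - 2 ** m            # messages that keep full length
total_coded = frequent * (m + 1) + rest * (n + 1)
total_plain = 2 ** n * n
assert total_coded > total_plain  # 2240 > 2048 for n=8, m=4
```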
 
Dave wrote:
Does anybody out there have a good methodology for determining your
optimal FPGA pinouts, for making PCB layouts nice, pretty, and clean?
The brute force method is fairly maddening. I'd be curious to hear if
anybody has any 'tricks of the trade' here.
The best way to get good pinouts is to finish a working
prototype of the hdl code before making the board.
I let place and route make the first cut unconstrained
and then clean up from there.

Also, just out of curiosity, how many of you do your own PCB layout,
Not me.
Whoever does this
should do it all day long, every day.

It would certainly save us a lot of money to
buy the tools and do it ourselves,
The first pass might save some money,
but by the time you have a working board
you will be in the hole.

but it seems like laying a
board out well requires quite a bit of experience, especially a 6-8
layer board with high pin count FPGAs.
That is correct.

-- Mike Treseler
 
On Apr 17, 1:04 pm, "Symon" <symon_bre...@hotmail.com> wrote:
Dave wrote:
Does anybody out there have a good methodology for determining your
optimal FPGA pinouts, for making PCB layouts nice, pretty, and clean?
The brute force method is fairly maddening. I'd be curious to hear if
anybody has any 'tricks of the trade' here.

Also, just out of curiosity, how many of you do your own PCB layout,
versus farming it out? It would certainly save us a lot of money to
buy the tools and do it ourselves, but it seems like laying a
board out well requires quite a bit of experience, especially a 6-8
layer board with high pin count FPGAs.

We're just setting up a hardware shop here, and although I've been
doing FPGA and board schematics design for a while, it's always been
at a larger company with resources to farm the layout out, and we
never did anything high-speed to really worry about the board layout
too much. Thanks in advance for your opinions.

Dave

Hi Dave,
I lay out my own PCBs. Unlike Mike T., I don't let the FPGA tools pick the
pinout. That said, it is important to carefully consider nets which
might have tight timing, e.g. clocks. I reason that there is a lot more
flexibility in the FPGA routing than on my PCB, and it's cheaper, so I can
save most time and money by being flexible in the pinout. I set the banks
the nets are to go on, and firm up the detailed pinout by swapping pins on
the FPGA's banks during the PCB layout process. You need some experience in
what your HDL code is gonna look like to be able to do this, but there you
go.
If you are adept at FPGA work, you'll find learning a PCB layout tool is a
piece of cake. I also use laser drilled microvias from layer 1 to 2, which
make the layout of big BGAs easier and saves layers. SI is easier also. The
price is usually less this way; the layer savings outweigh the via expense. You
don't need buried vias, IME.
Some of my FPGA buddies and I have had bad experiences with contract PCB
people. Sometimes they are knowledgeable and talented, but sometimes they
are dogmatic idiots, and sometimes they are useless. If you go the contract
route, it's important to closely monitor what they get up to so you find out
early doors which type they are.
Like you and Mike say, it depends a lot on your experience. If you've worked
closely with your layout guys in the past, that'll be a big help to you.
For sure, there's more than one way to skin a cat, but I enjoy PCB layout.
YMMV, good luck with it.
Cheers, Syms.
p.s. One benefit to laying out the PCB yourself is that it can help you spot
stupid mistakes in the circuit as you go. It forces you to look very closely
at the layout.
Depending on your PCB layout (and schematic capture) tools'
capabilities for defining constraints on pin swappability, you can
develop symbols that constrain IO pin swapping to meet the needs of
the design and/or FPGA. For example, we have symbols for FPGA's that
limit IO pin swapping to the same bank and other banks powered from
the same voltage rail. We lock down critical pins (global clock
inputs, etc.) and we have to "seed" the banks with their voltage
assignments, but after that, we are often able to let the PWB tool
auto-swap the FPGA pins, and then clean that up in layout. Then we
feed that pinout back to the FPGA design tools, and make sure we can
place and route the design in the FPGA while meeting timing. This is
often with a preliminary version of the FPGA code, but with relevant
IO structures in place.

Andy
 
Rich Grise wrote:
On Thu, 08 May 2008 07:37:44 +1200, Jim Granville wrote:

Do all your design decisions have the same careful reasoning basis?

Does all your writing show the same careful editing? >:-

Cheers!
Rich

Damn that comp.sci.electronics cross-posting!

That's the biggest thing I would fault bart for - cross-posting.

- John_H
 
recoder wrote:
Dear All,
Are there any Open source Core generators available? I am looking
for FIR and FFT Core generators but also wonder if open source
generators for other functions exist.
Thanks in Advance
Give opencores.org a try. I believe there is a FIR generator available.

Cheers,

Guenter
 
On May 18, 2:43 pm, "Robert Miles" <robertmi...@bellsouthNOSPAM.net>
wrote:
"Alex" <engin...@gmail.com> wrote in message

news:05bd2c8f-8660-49f3-8140-16baa048898f@n1g2000prb.googlegroups.com...



On May 18, 1:29 pm, Ben Bradley <ben_nospam_brad...@frontiernet.net>
wrote:
In the newsgroups comp.arch.fpga, comp.lang.verilog,
comp.arch.embedded, sci.electronics.design and comp.lang.vhdl, I saw a
thread in which the following words were approximately attributed to
the following posters:

On Wed, 7 May 2008 17:19:31 -0700, "BobW"

<nimby_NEEDS...@roadrunner.com> wrote:

"John Larkin" <jjlar...@highNOTlandTHIStechnologyPART.com> wrote in
message
news:o1e424d2h2uldtu4qm4589v667lu96hip8@4ax.com...
On Wed, 7 May 2008 12:19:40 -0700 (PDT), John_H
<newsgr...@johnhandwork.com> wrote:

John Larkin wrote:

To Lattice:

We dumped Lattice over buggy compilers and dinky performance. Now
that
you're spamming our group, I'll make the ban permanent.

To the group:
Whenever anybody spams us, please

1. Blackball them as a vendor

2. Say bad things about their companies and products, preferably
with
lots of google-searchable keywords.

John

Was this really necessary?

Yes.

If there were technical webcasts from any of the big vendors, I'd like
to know about them though preferably more than 8 minutes beforehand.

Email them, and sign up for subscriptions to all their blurbs. A
confirmed opt-in email list is a good way to disseminate such info. If
they don't have such a list or don't announce events timely, tell them
you'll only consider sources from companies who do.

If posts of this nature got to be more than a couple a month from
any one source I'd agree with the spam categorization, but it isn't
that frequent.

"Well, there's spam egg Lattice and spam, that's not got much spam
in it."

In other words, "they're not breaking the rules THAT often." With
the thousands of suppliers that provide products and services relevant
to even one of the cross-posted newsgroups, there could be hundreds of
posts per day of "legitimate" commercial posts.

I'm disappointed that you had problems with them in the past and won't
trust them for future designs because of your history; competition is
almost always good. But is it reason to be publicly vocal?

It's always good to be vocal about inappropriate posts. As for the
poster airing his previous problems with Lattice, perhaps they would
be better put in a blog or in a post where someone asks about using
Lattice, but that's a minor thing compared to the original post.

Kill-lists are easy to manage if bart's messages offend you.

I have better things to do than manage kill lists. I've got "better
things to do" than write this, but c.a.e and especially s.e.d were
useful to me a while back, and between all the spam and splorge
in recent years, it's a pleasant surprise to see these groups are
still viable. So I'm doing my little part to help keep them alive.

- John_H

If we don't discourage commercial posts, newsgroups will be flooded
with them. I can't kill-file the tens of thousands of companies who
would spam newsgroups if they thought it would pay off. So let's make
sure it *doesn't* pay off.

If they want to advertise, let them pay for it somewhere else.

John

For what it's worth, I agree with John.

It's a real shame that we, now, have to go out of our way to filter
commercial and sexual posts. There are proper places for both of those.
Usenet is not one of them, in my opinion.

Just to make a slight correction, THESE NEWSGROUPS (see crosspost
list at the top of my post) are not the proper place for commercial
posts. There are "marketplace" and "sex" newsgroups - if he's going to
spam, perhaps Bart Borosky of Lattice would do well to post to those
instead. There's no telling where a lonely engineer might go in his
spare time, and after all, "posting to Usenet is free" (as in both
beer AND speech).

Post, drink and speak responsibly.

Bob

Guys,

I read this thread after it was created and just wanted to ask a
couple of questions (while completely agreeing with the generally
accepted conclusion):
Was all this 'hot air' necessary?
Was all this bad-mouthing coming from some of the authors proper for
the group?

With respect,

Which group? It was crossposted to 5 different newsgroups, and is
unwelcome in at least one of them.
comp.arch.fpga - that is the group I had in mind (that's where I had
read this thread). I've been reading the group's threads often (and for
many years) and in most cases find myself getting excellent
information, quite often enjoying the quick wit and clever advice of many
contributors... That's why seeing this kind of language (and I'll
repeat - I do not approve of spam in any way) from _some_ authors is so
disappointing (I'm not talking about you, Robert :^) .

Why not to express your right-full indignation directly to the person
who generated the spam (when you can, as in this case) and/or modify
you spam filter if you'd like.(period)
Alex
 
On May 16, 4:52 pm, explore <chethanzm...@gmail.com> wrote:
Hi,
I have been running my designs on ISE 9.2i for a virtex-5 LX110t FPGA.
The time taken by the tool to complete a full synthesis and
implementation is a little over 3 hours on a Core2 Quad CPU running at
2.4 GHz with 2GB of RAM and Windows XP Pro 32-bit edition. I have
tried using ISE 10.1 and have observed that the time taken to run the
same design is less than 2 hrs. I have read the threads on Xilinx
about using more memory, I will be upgrading my memory to 3 GB or
more. I would like to get some recommendations for a system
configuration in terms of the best suitable processor, memory and any
other useful configuration to bring down the synthesis-map-par run-
time. The other discussion threads that I went through were either old
or did not point to an optimal configuration. Your inputs will be
helpful and are highly appreciated.

Thanks!
First off, a multiprocessor/core system won't do you much good, since the
tools do *NOT* currently support any type of SMP. And if you read
through the previous discussions about it, you'll see there's a lot of
foot-dragging over actually implementing it as well.

Secondly, from what I've seen here's what you really need:
a) a TON of RAM with a 64bit OS to actually support it.
b) the fastest processor with the most cache: probably one of the nice
big Xeons.

I have been working on the Core2Duo 6600s for my designs and they've
been alright. I have a Xeon w/ 4MB of cache sitting in the lab w/ RH
and I have been meaning to get around to testing a build on it, and
comparing it between my P4D 3.2GHz and the Core2Duo 6600. If I ever
get around to it, I'll post the results.

-- Mike
 
Andrew Smallshaw <andrews@sdf.lonestar.org> writes:
For all the talk of enhancing the user's experience it seems obvious
to me that MS don't give a shit about users.
Have you ever met Tux? :)
--
% Randy Yates % "Maybe one day I'll feel her cold embrace,
%% Fuquay-Varina, NC % and kiss her interface,
%%% 919-577-9882 % til then, I'll leave her alone."
%%%% <yates@ieee.org> % 'Yours Truly, 2095', *Time*, ELO
http://www.digitalsignallabs.com
 
This is also an area where Microsoft have completely lost the plot.
Since Windows 95 every major release of Windows has been accompanied
by a new interface. Applications are even worse - I don't know
how many style of toolbar have been played with over the last 15
years. Microsoft always make great play of the new interface but
who exactly does it benefit? Users are forced to learn new interfaces
every upgrade and application developers are forced to 'upgrade'
their programs with the new UI or risk being considered outdated.

The only people I can see benefiting are Microsoft themselves (it
provides a very obvious reason to upgrade, even if it does lack
clear benefits) and hardware manufacturers (the upgrade needs newer
faster hardware). For all the talk of enhancing the user's experience
it seems obvious to me that MS don't give a shit about users. All
that matters is ensuring that the revenue keeps coming in from
repeated meaningless upgrades.
An excellent example of this is Office 2007. The UI for all the major
applications have been changed completely with Ribbon bars, etc. It
takes ages to work out where on earth really simple things are and they
are now buried in (even more) obscure locations. Microsoft's
advertising will have you believe "easy" but I disagree.

Andrew
 
