Grand Apagon - Electricity (not) in Spain...

On 8/05/2025 2:48 am, Liz Tuddenham wrote:
Bill Sloman <bill.sloman@ieee.org> wrote:

On 6/05/2025 5:13 pm, Liz Tuddenham wrote:
Bill Sloman <bill.sloman@ieee.org> wrote:

On 6/05/2025 5:04 am, Liz Tuddenham wrote:
Bill Sloman <bill.sloman@ieee.org> wrote:

On 6/05/2025 2:35 am, Liz Tuddenham wrote:
john larkin <jl@glen--canyon.com> wrote:

snip

Even politicians can be relied on to be less stupid than that.
[...]

There is no evidence to support your claim at the moment.

There is negative evidence - there haven't been enough deaths that
anyone can ascribe to political stupidity.

The conversion to mains-dependency is nowhere near completion yet and
the change has been so rapid that there haven't been any major power
cuts during that time.

It will happen.

Liz Tuddenham, prophet.

Anything that can go wrong - will.

That's what the uninterruptible power supply business exists to deal with.

Anything that can't go wrong - will eventually.

But redundancy can make it very unlikely that they'll all go wrong at
the same time.

Anything that doesn't go wrong eventually hasn't been tested long
enough.

Perfectly true, but since the human life-time is finite, it may not be
all that relevant.

--
Bill Sloman, Sydney
 
On 08/05/2025 00:01, Don Y wrote:
Much like me having carrier doesn't tell me the extent of
my "reach", here.

Well, the power outage was "total". :-D

Yeah, but you don't know which services (up the chain) may
have their own *local*/private backup systems.  E.g., I doubt
your hospitals were without power (?)  The extent of backup
beyond that would be something you'd have to know, in advance.

If the fibre goes direct to the exchange, they had backup power.
However, if the distance is great and they have to reconstruct the
signal with some kind of optical amplifier, then I don't know. The
distance is about 2.5 km.

But where can exchange traffic go?  See what I mean?  Anyone that
you want to contact (and everyone along the way) must be "up".

That was the original point of ARPANET then EPSS and later the internet.
Packet switching means that any route to the destination at all will do.

I'm told that my fibre feed is passive optical connectors and splices
all the way back to the regional exchange about 12 miles away. My local
exchange was about 5 miles away on a so-called exchange-only direct
line (which meant that ADSL 2+ was the limit for me prior to FTTP).
My mobile phone worked all day; I could send and receive WhatsApp
messages.

Are those processed "locally"?

Mobile phone masts here typically have a lifetime of about 8-40 hours
after power failure depending on how heavily they are being used.
Backhaul presumably is optical or microwave.

Most powercuts tend to be fairly local round here - a regional powercut
or a national one requires something truly catastrophic to happen.

I can only recall one UK powercut in that league in the past half
century (August 9 2019). Of course it directly affected the densely
populated affluent regions London and the South East. Therefore it was
much more newsworthy than if it had affected the remote Scottish
Highlands where weather induced powercuts are quite common.

The recent big one at Heathrow didn't affect all that many people
although it did take down the whole airport which shows remarkably bad
contingency planning - it should have had supply redundancy and the
ability to switchover to it before the diesel generators ran out of
fuel. Heads should roll over them having to shut down completely.

I have a small computer doing server things, and it tried to email me
as soon as the UPS said it was running on battery. That email did not
reach me till the power came back; this could be that the fibre went
OOS, or that the UPS at my router went down instantly. I do not know.

Doesn't your UPS deliver log messages (to a syslog server or data
dumps to an FTP service)?

I have each of mine configured to give me summaries of power consumption
and line conditions each minute.  And, use a syslogd on that same server.

I only log external power failures. Kitchen appliance clocks all reset
when we lose power for more than a couple of seconds.

I'm considering replacing the UPS at my router. Some UPS "destroy" the
battery too fast.

Yes.  Rather than spend time investigating it, I've taken the approach
of just rescuing batteries to replace those that have been "cooked".

That is a feature of UPS design: specsmanship to get the longest run
time for the sales datasheet means that they cook their batteries. I
have seen them swell to the point of bursting inside a UPS. Thick rubber
gloves needed to remove the remains. Support metalwork was a real
corroded rusty mess but electronics above it remained OK.

I suspect the problem (rationalized by the manufacturers) is trying to
bring the battery back to full charge ASAP -- as well as keeping the
highest state of charge that the battery can support.

Which taken to extremes is very bad for battery life.

Charging at a slower rate and to a lower float voltage would
compromise the UPS's availability -- but lower maintenance costs
(of course, the manufacturer wants to sell you batteries, so you
can see where their priorities will lie!)
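
A back-of-envelope sketch of that tradeoff (all numbers below are hypothetical,
and real chargers taper the current as the battery approaches full charge): the
faster the recharge, the sooner the UPS is back to full reserve after an outage,
which is the availability argument for aggressive charging -- at the cost of
heat and battery wear.

# Rough sketch, not from the thread: recharge time vs. charge rate for a
# small sealed lead-acid pack. Figures are illustrative assumptions.
CAPACITY_AH = 9.0          # hypothetical 9 Ah pack
DEPTH_USED = 0.5           # fraction of capacity drained during the outage
CHARGE_EFFICIENCY = 0.85   # coulombic losses while recharging lead-acid

for c_rate in (0.1, 0.2, 0.4):   # 0.1C is gentle; 0.4C is the "spec sheet" hurry
    current_a = c_rate * CAPACITY_AH
    hours = (CAPACITY_AH * DEPTH_USED) / (current_a * CHARGE_EFFICIENCY)
    print(f"Charge at {current_a:.1f} A ({c_rate}C): ~{hours:.1f} h to full reserve")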

They really think I'm going to buy their vastly overpriced replacements?

--
Martin Brown
 
But where can exchange traffic go?  See what I mean?  Anyone that
you want to contact (and everyone along the way) must be "up".

That was the original point of ARPANET then EPSS and later the internet. Packet
switching means that any route to the destination at all will do.

But that assumes there *is* a series of hops that can get you "there"...
wherever "there" happens to be. In a nationwide outage, what chance
that everything EXCEPT some critical bit of comms gear is affected?
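
As a rough illustration of the point (a sketch only; the topology and node
states below are invented, not anything described in the thread): packet
switching helps exactly when at least one chain of "up" hops survives between
the endpoints, and not otherwise.

from collections import deque

def any_route(graph, up, src, dst):
    """Breadth-first search: return one working path from src to dst using
    only nodes that are still 'up', or None if no such chain of hops exists."""
    if src not in up or dst not in up:
        return None
    frontier = deque([[src]])
    seen = {src}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in graph.get(node, ()):
            if nxt in up and nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# Hypothetical exchanges/routers; "B" has lost power.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}
up = {"A", "C", "D", "E"}
print(any_route(graph, up, "A", "E"))           # ['A', 'C', 'D', 'E'] -- rerouted around B
print(any_route(graph, up - {"D"}, "A", "E"))   # None -- no surviving chain of hops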

I'm told that my fibre feed is passive optical connectors and splices all the
way back to the regional exchange about 12 miles away. My local exchange was
about 5 miles away on a so-called exchange-only direct line (which meant that
ADSL 2+ was the limit for me prior to FTTP).

So, you rely on the exchange having upstream connectivity. Along
with the fiber link TO the exchange.

My mobile phone worked all day; I could send and receive WhatsApp messages.

Are those processed "locally"?

Mobile phone masts here typically have a lifetime of about 8-40 hours after
power failure depending on how heavily they are being used. Backhaul presumably
is optical or microwave.

So, also subject to outage.

Most powercuts tend to be fairly local round here - a regional powercut or a
national one requires something truly catastrophic to happen.

I can only recall one UK powercut in that league in the past half century
(August 9 2019). Of course it directly affected the densely populated affluent
regions London and the South East. Therefore it was much more newsworthy than
if it had affected the remote Scottish Highlands where weather induced
powercuts are quite common.

I don't think I've ever (regardless of where I've lived) experienced
a deliberate power cut. A drunk may take out a telephone pole or
a branch may fall on some high tension wires but no one has ever
said "sorry, we're turning the lights out" (for whatever reason)

The recent big one at Heathrow didn't affect all that many people although it
did take down the whole airport which shows remarkably bad contingency planning
- it should have had supply redundancy and the ability to switchover to it
before the diesel generators ran out of fuel. Heads should roll over them
having to shut down completely.

Fukushima?

I have a small computer doing server things, and it tried to email me as
soon as the UPS said it was running on battery. That email did not reach me
till the power came back; this could be that the fibre went OOS, or that the
UPS at my router went down instantly. I do not know.

Doesn't your UPS deliver log messages (to a syslog server or data
dumps to an FTP service)?

I have each of mine configured to give me summaries of power consumption
and line conditions each minute.  And, use a syslogd on that same server.

I only log external power failures. Kitchen appliance clocks all reset when we
lose power for more than a couple of seconds.

Each UPS has a link to my syslogd (the switches and that server being backed
up in the event of power outages).

I additionally configure them to report their loads every minute (to get
a feel for where I'm "doing work" as well as how heavily each is taxed).

I'm considering replacing the UPS at my router. Some UPS "destroy" the
battery too fast.

Yes.  Rather than spend time investigating it, I've taken the approach
of just rescuing batteries to replace those that have been "cooked".

That is a feature of UPS design: specsmanship to get the longest run time
for the sales datasheet means that they cook their batteries. I have seen them
swell to the point of bursting inside a UPS. Thick rubber gloves needed to
remove the remains. Support metalwork was a real corroded rusty mess but
electronics above it remained OK.

Yup. They have a rationalization, though -- they are trying to provide the
highest availability. Else, how much availability do you sacrifice to
maximize battery life? Do you then start specifying battery life as a
primary selection criterion?

[Most SOHO users buy a UPS -- thinking they are being "professional" -- and
then discard it when the battery needs replacing and they discover the
costs charged by the UPS manufacturer -- or local "battery stores"]

I suspect the problem (rationalized by the manufacturers) is trying to
bring the battery back to full charge ASAP -- as well as keeping the
highest state of charge that the battery can support.

Which taken to extremes is very bad for battery life.

Of course. But, they are in the PRIMARY business of selling batteries,
not UPSs!

Charging at a slower rate and to a lower float voltage would
compromise the UPS's availability -- but lower maintenance costs
(of course, the manufacturer wants to sell you batteries, so you
can see where their priorities will lie!)

They really think I'm going to buy their vastly overpriced replacements?

If you were a business, it would just be a maintenance expense.
You would budget for it. If SOHO, you'd likely replace it at
most once and then realize "Gee, I haven't NEEDED this in the
past three years so why am I spending more money on it?"

With the exception of multi-user servers, individual workstations
usually have auto-backup provisions *in* the key applications.
And, in the event of an outage (even if the machine stays up),
the user is usually distracted by the rest of the house/office
going black; is ~15 minutes of uptime going to be enough if the
user isn't AT the machine when power fails?

No one has yet addressed the market where TCO is the driving
criterion.
 
On 08/05/2025 12:49, Don Y wrote:
But where can exchange traffic go?  See what I mean?  Anyone that
you want to contact (and everyone along the way) must be "up".

That was the original point of ARPANET then EPSS and later the
internet. Packet switching means that any route to the destination at
all will do.

But that assumes there *is* a series of hops that can get you "there"...
wherever "there" happens to be.  In a nationwide outage, what chance
that everything EXCEPT some critical bit of comms gear is affected?

Nationwide outages should be exceptionally rare. Never had one in the UK,
but the way they are going, one winter's day it will happen.

I'm told that my fibre feed is passive optical connectors and splices
all the way back to the regional exchange about 12 miles away. My local
exchange was about 5 miles away on a so-called exchange-only direct
line (which meant that ADSL 2+ was the limit for me prior to FTTP).

So, you rely on the exchange having upstream connectivity.  Along
with the fiber link TO the exchange.

That is usually a given since the regional and further up the chain
concentrators all have UPS and diesel generators or fuel cell supply.

My mobile phone worked all day; I could send and receive
WhatsApp messages.

Are those processed "locally"?

Mobile phone masts here typically have a lifetime of about 8-40 hours
after power failure depending on how heavily they are being used.
Backhaul presumably is optical or microwave.

So, also subject to outage.

But the central nodes usually have better battery backup and/or
generators than the local nodes. Local nodes die first according to how
much traffic they have to handle.

Most powercuts tend to be fairly local round here - a regional
powercut or a national one requires something truly catastrophic to
happen.

I can only recall one UK powercut in that league in the past half
century (August 9 2019). Of course it directly affected the densely
populated affluent regions London and the South East. Therefore it was
much more newsworthy than if it had affected the remote Scottish
Highlands where weather induced powercuts are quite common.

I don't think I've ever (regardless of where I've lived) experienced
a deliberate power cut.  A drunk may take out a telephone pole or
a branch may fall on some high tension wires but no one has ever
said "sorry, we're turning the lights out" (for whatever reason)

They do that routinely where I live, once a year, for trimming trees that
might otherwise short out live supply lines or, worse, fall onto them.

I wouldn't describe that 2019 powercut as deliberate either; it was a
huge MFU caused by a single lightning strike to an insignificant power
plant that led to a cascade network failure.

We typically lose power a couple of times a year due to very high winds
toppling poles and/or the sort of snow that sticks onto trees and makes
them break. The mains poles here are now antique, installed in the ~1950s, and
the bases are rotten. Unsafe for linesmen to climb and marked as such.

The recent big one at Heathrow didn't affect all that many people
although it did take down the whole airport which shows remarkably bad
contingency planning - it should have had supply redundancy and the
ability to switchover to it before the diesel generators ran out of
fuel. Heads should roll over them having to shut down completely.

Fukushima?

Genuine natural disaster beyond what the designers had considered. They
almost got away with it but didn't. Tsunami are absolutely terrifying.
Another one when I was in Japan in the hours of darkness was reported as
>2m (which was when the gauge stopped transmitting). The next morning
there was seaweed hanging off supergrid pylon wires.

I'm considering replacing the UPS at my router. Some UPS "destroy"
the battery too fast.

Yes.  Rather than spend time investigating it, I've taken the approach
of just rescuing batteries to replace those that have been "cooked".

That is a feature of UPS design: specsmanship to get the longest
run time for the sales datasheet means that they cook their batteries.
I have seen them swell to the point of bursting inside a UPS. Thick
rubber gloves needed to remove the remains. Support metalwork was a
real corroded rusty mess but electronics above it remained OK.

Yup.  They have a rationalization, though -- they are trying to provide the
highest availability.  Else, how much availability do you sacrifice to
maximize battery life?  Do you then start specifying battery life as a
primary selection criterion?

I think they probably could back off the fast recharge a bit. I'm always
nervous of going back on again too soon after power is restored (even
though my systems are reasonably fault tolerant). Sometimes the mains
restoration goes on and off several times, a few seconds apart, if there
are still other transient leak-to-ground faults on the lines.
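
A minimal sketch of that "don't go back on too soon" logic (the 120 s hold time
and the mains_ok() polling hook are assumptions for illustration, not anything
from the thread): require the mains to stay up continuously for a set period
before switching the load back.

import time

def wait_for_stable_mains(mains_ok, hold_s=120, poll_s=1.0):
    """Block until mains_ok() has returned True continuously for hold_s seconds.
    Any dropout restarts the timer, so brief on/off cycling after a fault is
    ridden out before the load is reconnected."""
    stable_since = None
    while True:
        if mains_ok():
            if stable_since is None:
                stable_since = time.monotonic()
            elif time.monotonic() - stable_since >= hold_s:
                return
        else:
            stable_since = None      # restoration glitched; start the count again
        time.sleep(poll_s)

# mains_ok() would poll the UPS status (serial/USB interface or a monitoring
# daemon); how that is done is left open here.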

[Most SOHO users buy a UPS -- thinking they are being "professional" -- and
then discard it when the battery needs replacing and they discover the
costs charged by the UPS manufacturer -- or local "battery stores"]

I suspect the problem (rationalized by the manufacturers) is trying to
bring the battery back to full charge ASAP -- as well as keeping the
highest state of charge that the battery can support.

Which taken to extremes is very bad for battery life.

Of course.  But, they are in the PRIMARY business of selling batteries,
not UPSs!

A bit like printers then.

Charging at a slower rate and to a lower float voltage would
compromise the UPS's availability -- but lower maintenance costs
(of course, the manufacturer wants to sell you batteries, so you
can see where their priorities will lie!)

They really think I'm going to buy their vastly overpriced replacements?

If you were a business, it would just be a maintenance expense.
You would budget for it.  If SOHO, you'd likely replace it at
most once and then realize "Gee, I haven't NEEDED this in the
past three years so why am I spending more money on it?"

With the exception of multi-user servers, individual workstations
usually have auto-backup provisions *in* the key applications.
And, in the event of an outage (even if the machine stays up),
the user is usually distracted by the rest of the house/office
going black; is ~15 minutes of uptime going to be enough if the
user isn't AT the machine when power fails?

No one has yet addressed the market where TCO is the driving
criterion.

To some extent it is an insurance policy to not lose what I'm working on
if the power does go down suddenly. Despite having theoretical lightning
protection as well, I also shut down when there are thunderstorms about.

I saw what a big lightning strike to our works building did to the
switchboard and mainframe. The surge protection devices on a big chunky
copper bus bar saved themselves by allowing transients to fry all of the
terminal driver boards. The phone lines were just a sooty shadow on the
wall and it blew the clip-on covers off the cableway.

About once a decade we get lightning to tree strikes within 100m. It
usually fries bedside clocks and modems (although mine survived OK last
time). This was despite a 1" calorific spark jumping off it.

--
Martin Brown
 
On 2025-05-08 13:49, Don Y wrote:

....

I can only recall one UK powercut in that league in the past half
century (August 9 2019). Of course it directly affected the densely
populated affluent regions London and the South East. Therefore it was
much more newsworthy than if it had affected the remote Scottish
Highlands where weather induced powercuts are quite common.

I don't think I've ever (regardless of where I've lived) experienced
a deliberate power cut.  A drunk may take out a telephone pole or
a branch may fall on some high tension wires but no one has ever
said "sorry, we're turning the lights out" (for whatever reason)

Not the same, but last Monday someone stole the signalling cable on the
high-speed railway to Andalusia, leaving the entire line OOS. I heard
that trains were authorized to run at 40 km/h, so that they could see
the other train in time and tail it. Not sure it worked.

The authorities talked of sabotage. The price of the cable when new is
not even a thousand euros, but the damage to thousands of people is huge.

....

I'm considering replacing the UPS at my router. Some UPS "destroy"
the battery too fast.

Yes.  Rather than spend time investigating it, I've taken the approach
of just rescuing batteries to replace those that have been "cooked".

That is a feature of UPS design: specsmanship to get the longest
run time for the sales datasheet means that they cook their batteries.
I have seen them swell to the point of bursting inside a UPS. Thick
rubber gloves needed to remove the remains. Support metalwork was a
real corroded rusty mess but electronics above it remained OK.

Yup.  They have a rationalization, though -- they are trying to provide the
highest availability.  Else, how much availability do you sacrifice to
maximize battery life?  Do you then start specifying battery life as a
primary selection criterion?

[Most SOHO users buy a UPS -- thinking they are being "professional" -- and
then discard it when the battery needs replacing and they discover the
costs charged by the UPS manufacturer -- or local "battery stores"]

25€. A 9Ah item, high discharge rate.


I suspect the problem (rationalized by the manufacturers) is trying to
bring the battery back to full charge ASAP -- as well as keeping the
highest state of charge that the battery can support.

Which taken to extremes is very bad for battery life.

Of course.  But, they are in the PRIMARY business of selling batteries,
not UPSs!

Ugh.

And having disgruntled customers.

Charging at a slower rate and to a lower float voltage would
compromise the UPS's availability -- but lower maintenance costs
(of course, the manufacturer wants to sell you batteries, so you
can see where their priorities will lie!)

They really think I'm going to buy their vastly overpriced replacements?

If you were a business, it would just be a maintenance expense.
You would budget for it.  If SOHO, you'd likely replace it at
most once and then realize "Gee, I haven't NEEDED this in the
past three years so why am I spending more money on it?"

With the exception of multi-user servers, individual workstations
usually have auto-backup provisions *in* the key applications.
And, in the event of an outage (even if the machine stays up),
the user is usually distracted by the rest of the house/office
going black; is ~15 minutes of uptime going to be enough if the
user isn't AT the machine when power fails?

You need software monitoring to hibernate or power off the machine.

No one has yet addressed the market where TCO is the driving
criterion.

--
Cheers, Carlos.
 
On 08/05/2025 14:41, Carlos E.R. wrote:
On 2025-05-08 14:44, Carlos E.R. wrote:

UPS Topology: Standby (Offline) or Standby (Offline)

{Phrase translated by DeepL, so inconsistent: Topología UPS: En espera
(Fuera de línea) o Standby (Offline)}

There are (at least) two major UPS topologies in play.

One is where the power to the protected device is always made by the
inverter and maintained at the correct main voltage irrespective of
input voltage to the UPS. Useful in places where the local mains supply
voltage goes up and down a lot depending on load.

The other is a pass through of input mains voltage to the load under
normal conditions and an isolation relay plus cold start of the inverter
within a couple of cycles of the supply failure. This is more than good
enough for PCs. Mine can withstand a 1s blackout unprotected without any
difficulty but kitchen white goods clocks cannot.

I do not see a reference to that "topology" except at the vendor. But it
says that the expected battery life is 4 years.


--
Martin Brown
 
Martin Brown <'''newspam'''@nonad.co.uk> wrote:

On 08/05/2025 13:43, Liz Tuddenham wrote:
Don Y <blockedofcourse@foo.invalid> wrote:


I don't think I've ever (regardless of where I've lived) experienced
a deliberate power cut. A drunk may take out a telephone pole or
a branch may fall on some high tension wires but no one has ever
said "sorry, we're turning the lights out" (for whatever reason)

That was exactly what happened in the UK in the early 1970s; we had a
rota of power cuts lasting 4 hours each. I made up an automatic
lighting unit based on a car battery for my parents. It used relays to
switch on when the mains went, then recharge at a fast rate until the
battery voltage rose high enough, then trickle charge.
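
A rough sketch of that charge logic, recast as control pseudo-logic rather than
the original relay implementation (the voltage thresholds are illustrative
assumptions, not the ones actually used):

# States: 'backup' (mains gone, lamp on), 'fast' (bulk recharge), 'float' (trickle).
FAST_CHARGE_CUTOFF_V = 14.4   # stop the fast charge when the battery reaches this
FLOAT_V = 13.6                # level a real charger would hold in the 'float' state

def step(mains_present, battery_v, state):
    """One control step: return (next_state, lamp_on)."""
    if not mains_present:
        return "backup", True                      # mains gone: light the lamp
    if state == "backup":
        return "fast", False                       # mains back: recharge hard
    if state == "fast" and battery_v >= FAST_CHARGE_CUTOFF_V:
        return "float", False                      # topped up: drop to trickle
    return state, False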

That was during the various coal miners' strikes which were at their peak
then. Local newspapers had rotas for planned supply cuts.

ISTR there was still the odd planned power cut even in the late 1970s
but they became increasingly rare after that.

A \"Disconnection Rota\" marker has recently started appearing on my
electricity bill.


--
~ Liz Tuddenham ~
(Remove the ".invalid"s and add ".co.uk" to reply)
www.poppyrecords.co.uk
 
On 1/05/2025 1:05 am, Liz Tuddenham wrote:
Bill Sloman <bill.sloman@ieee.org> wrote:

On 30/04/2025 8:41 pm, Carlos E.R. wrote:
On 2025-04-30 11:59, Liz Tuddenham wrote:
Bill Sloman <bill.sloman@ieee.org> wrote:

... pumped hydro storage has the spinning
turbines, but grid scale batteries have inverters, which can react a lot
faster than any spinning turbine,

I thought the stabilising effect of a spinning turbine was because it
*didn't* react quickly.

The grid frequency begins to fall so energy from the moving parts is
converted to electrical power which is fed into the grid to increase
the frequency.  This results in a loss of stored mechanical energy which
causes the turbine to begin slowing down - which is detected by the
control system and used to feed more water/gas/steam into the turbine so
its speed is returned to normal.

I understand that the turbine doesn't actually slow down, because the
generator starts working as a synchronous motor drawing energy from the
network instead; this is detected by the control system and feeds more
water/gas/steam, etc.

It doesn't slow down much, but there's no such thing as instantaneous
feedback - you have to see an input change before you can start correcting
the output.

As long as the network keeps the frequency.

The \"network\" can\'t keep the frequency - it\'s the corrections that keep
the low term frequency stable

The interface between the stored mechanical energy and the electrical
energy demand has an almost instant response and is inherently stable
without needing elaborate control algorithms.

But the stored mechanical energy in the spinning rotor can only get fed
into the grid if the rotor slows down.

The generator has to have a control system to control the power being
fed into the rotor to keep it spinning at the same speed while more
energy is being extracted from it.

There's nothing magically stable about that kind of control system - it
has to be designed to be stable like any other feedback mechanism.

There are two mechanisms at work here:

1) The coupling between the rotating machine and the grid, which is
virtually instantaneous and extracts mechanical energy from the rotating
'store' without any special control system. It is inherently stable.

Only in the sense that it doesn't do anything unpredictable.

2) The coupling between the rotating machine and the 'prime mover'
power source, which puts mechanical energy into the rotating 'store'.
This is slower to respond and does need careful control to keep it
stable.

The point about inverter-based controls taking energy from grid-scale
batteries (or feeding it into them) is that they can operate much
faster. They can force the voltage at their connection to the grid to
be sinusoidal on a millisecond-to-millisecond basis.

The difference between these two shows up as a change in the speed of
rotation.

If the current output from the rotating machinery dominates the energy
being fed into the grid, that will happen. If most of the energy is
coming from solar cells through inverters, that isn't a useful way of
looking at what's going on.
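
As an aside (the standard textbook relation, not something stated in the
thread), the scale of that effect can be put into numbers with the swing
equation: the initial rate of change of frequency after a sudden
generation/load imbalance is df/dt = f0 * dP / (2 * H * S), where H is the
system inertia constant in seconds and S the synchronised MVA base. Less
spinning plant means lower H and a faster excursion, which is the gap that
fast-acting inverters are meant to cover. Illustrative figures only:

# Swing-equation sketch with invented numbers (not real Spanish-grid figures).
f0 = 50.0        # Hz
S = 30_000.0     # MVA of synchronised generation (hypothetical)
dP = 2_000.0     # MW of generation suddenly lost (hypothetical)

for H in (5.0, 2.0, 1.0):    # less rotating plant -> lower H -> faster fall
    rocof = f0 * dP / (2.0 * H * S)
    print(f"H = {H:.0f} s: initial df/dt ~ {rocof:.2f} Hz/s")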

--
Bill Sloman, Sydney
 
On 08/05/2025 17:19, Liz Tuddenham wrote:
Martin Brown <'''newspam'''@nonad.co.uk> wrote:

On 08/05/2025 13:43, Liz Tuddenham wrote:
Don Y <blockedofcourse@foo.invalid> wrote:


I don't think I've ever (regardless of where I've lived) experienced
a deliberate power cut. A drunk may take out a telephone pole or
a branch may fall on some high tension wires but no one has ever
said "sorry, we're turning the lights out" (for whatever reason)

That was exactly what happened in the UK in the early 1970s; we had a
rota of power cuts lasting 4 hours each. I made up an automatic
lighting unit based on a car battery for my parents. It used relays to
switch on when the mains went, then recharge at a fast rate until the
battery voltage rose high enough, then trickle charge.

That was during the various coal miners' strikes which were at their peak
then. Local newspapers had rotas for planned supply cuts.

ISTR there was still the odd planned power cut even in the late 1970s
but they became increasingly rare after that.

A \"Disconnection Rota\" marker has recently started appearing on my
electricity bill.

You still get paper ones? I confess I haven't actually looked at my
virtual "paper" electricity bill for ages. I use the online portal to
check usage and how much they have taken in DD for prepayment.

Electricity companies have a nasty habit of always increasing DDs when
the price goes up and never reducing them, so that if you don't keep an
eye on it they can borrow your money. Officially, to smooth out winter costs.

I'll take a look at mine. I only have water and sewage as physical paper
bills to satisfy what laughingly passes for "proof of identity" in the
UK! The other utilities are on DD to get preferential rates.

I can't see why we would ever be in line for rationing power cuts, though;
we are pretty much on the nexus where under all foreseeable scenarios we
have an oversupply of power that can't move any further south!

It might fail for over voltage though.

--
Martin Brown
 
On 5/8/25 5:43 AM, Liz Tuddenham wrote:
Don Y <blockedofcourse@foo.invalid> wrote:


I don't think I've ever (regardless of where I've lived) experienced
a deliberate power cut. A drunk may take out a telephone pole or
a branch may fall on some high tension wires but no one has ever
said "sorry, we're turning the lights out" (for whatever reason)

That was exactly what happened in the UK in the early 1970s; we had a
rota of power cuts lasting 4 hours each. I made up an automatic
lighting unit based on a car battery for my parents. It used relays to
switch on when the mains went, then recharge at a fast rate until the
battery voltage rose high enough, then trickle charge.

...

When the coal miners' strike power usage reductions were in effect I was
working at Marconi-Elliott in Borehamwood. We were not allowed to have
the lights or heating on but it was permitted to use test equipment, so
we would huddle around our Tektronix 547 scopes to keep warm; they used
to put out a lot of heat.
 
On 2025-05-08 16:18, Martin Brown wrote:
On 08/05/2025 14:41, Carlos E.R. wrote:
On 2025-05-08 14:44, Carlos E.R. wrote:

UPS Topology: Standby (Offline) or Standby (Offline)

{Phrase translated by DeepL, so inconsistent: Topología UPS: En espera
(Fuera de línea) o Standby (Offline)}

There are (at least) two major UPS topologies in play.

One is where the power to the protected device is always made by the
inverter and maintained at the correct mains voltage irrespective of
input voltage to the UPS. Useful in places where the local mains supply
voltage goes up and down a lot depending on load.

The other is a pass through of input mains voltage to the load under
normal conditions and an isolation relay plus cold start of the inverter
within a couple of cycles of the supply failure. This is more than good
enough for PCs. Mine can withstand a 1s blackout unprotected without any
difficulty but kitchen white goods clocks cannot.

Ah! Yes, I remember now, I have seen this before. It was the
terminology that was confusing me.

Thanks.

I do not see a reference to that "topology" except at the vendor. But
it says that the expected battery life is 4 years.

--
Cheers, Carlos.
 
On 5/8/2025 5:43 AM, Martin Brown wrote:
On 08/05/2025 12:49, Don Y wrote:
But where can exchange traffic go?  See what I mean?  Anyone that
you want to contact (and everyone along the way) must be "up".

That was the original point of ARPANET then EPSS and later the internet.
Packet switching means that any route to the destination at all will do.

But that assumes there *is* a series of hops that can get you "there"...
wherever "there" happens to be.  In a nationwide outage, what chance
that everything EXCEPT some critical bit of comms gear is affected?

Nationwide outages should be exceptionally rare. Never had one in the UK, but
the way they are going, one winter's day it will happen.

Yes, but they *do* happen. As well as events that take out large
geographical areas (e.g., 9/11).

One can't protect against (e.g.) 10 sigma occurrences as it's not economical.
But, people still have to potentially deal with them.

Most powercuts tend to be fairly local round here - a regional powercut or a
national one requires something truly catastrophic to happen.

I can only recall one UK powercut in that league in the past half century
(August 9 2019). Of course it directly affected the densely populated
affluent regions London and the South East. Therefore it was much more
newsworthy than if it had affected the remote Scottish Highlands where
weather induced powercuts are quite common.

I don't think I've ever (regardless of where I've lived) experienced
a deliberate power cut.  A drunk may take out a telephone pole or
a branch may fall on some high tension wires but no one has ever
said "sorry, we're turning the lights out" (for whatever reason)

They do that routinely where I live, once a year, for trimming trees that might
otherwise short out live supply lines or, worse, fall onto them.

I don't recall it in any of the places that I've lived with overhead
services. Here, things are below grade.

In the former case, outages were frequent enough (one place they occurred
monthly) that any sort of PM would be lost in the noise. In the latter,
it\'s decades between *clusters* of failures.

I wouldn't describe that 2019 powercut as deliberate either; it was a huge MFU
caused by a single lightning strike to an insignificant power plant that led
to a cascade network failure.

We typically lose power a couple of times a year due to very high winds
toppling poles and/or the sort of snow that sticks onto trees and makes them
break. The mains poles here are now antique, installed in the ~1950s, and the bases
are rotten. Unsafe for linesmen to climb and marked as such.

I don't know how our "area" is fed as it has to be overhead *somewhere*.
But, most of the high tension lines are on metal support towers or
metal "poles" (18-24" diameter at base)

The other side of town has overhead service and frequently is down
for a day or so as high winds play dominoes with the poles.

The recent big one at Heathrow didn't affect all that many people although
it did take down the whole airport which shows remarkably bad contingency
planning - it should have had supply redundancy and the ability to
switchover to it before the diesel generators ran out of fuel. Heads should
roll over them having to shut down completely.

Fukushima?

Genuine natural disaster beyond what the designers had considered. They almost

Like grid failures the designers hadn\'t considered? :>

got away with it but didn't. Tsunami are absolutely terrifying. Another one
when I was in Japan in the hours of darkness was reported as >2m (which was
when the gauge stopped transmitting). The next morning there was seaweed
hanging off supergrid pylon wires.

I'm considering replacing the UPS at my router. Some UPS "destroy" the
battery too fast.

Yes.  Rather than spend time investigating it, I've taken the approach
of just rescuing batteries to replace those that have been "cooked".

That is a feature of UPS design: specsmanship to get the longest run
time for the sales datasheet means that they cook their batteries. I have
seen them swell to the point of bursting inside a UPS. Thick rubber gloves
needed to remove the remains. Support metalwork was a real corroded rusty
mess but electronics above it remained OK.

Yup.  They have a rationalization, though -- they are trying to provide the
highest availability.  Else, how much availability do you sacrifice to
maximize battery life?  Do you then start specifying battery life as a
primary selection criterion?

I think they probably could back off the fast recharge a bit. I'm always
nervous of going back on again too soon after power is restored (even though my
systems are reasonably fault tolerant). Sometimes the mains restoration goes on
and off several times, a few seconds apart, if there are still other transient
leak-to-ground faults on the lines.

Our outages tend to be clean on/off/on events as they are simple equipment
failures. One cable segment replacement was followed by a *second*
when the line was reenergized but that was unusual. And, the second
failure was almost immediate.

I have every machine (save the 24/7/365 "services" box) set to stay
off in the event of power failure. I don't need to have them all
spin up of their own accord if unattended.

Of course.  But, they are in the PRIMARY business of selling batteries,
not UPSs!

A bit like printers then.

There are depressingly many "toilet paper" products (products where
the main business is selling toilet paper, NOT the "free dispensers").
One firm sells distilled water in tiny vials. The vials are *chipped*
to ensure you don't substitute some other distilled water for THEIR
distilled water. <rolls eyes>

[Yes, of course there are likely standards of purity involved
but I'm sure the real issue is continued revenue stream]

Some years ago, I designed a box; lots of pressure to get the
costs down (about $300 DM+DL). The boxes were "sold" for $6000.
But, all were given away -- to sell chemical reagents!

An acquaintance makes his living buying drug stores (pharmacies).
All of the content is treated as crap and resold for pennies
on the dollar. What he's after is the prescriptions that the
store handles. So, when you laugh at the outrageous prices
that they charge for common items (that you can easily buy
elsewhere), know that those aren't the real profit drivers!

With the exception of multi-user servers, individual workstations
usually have auto-backup provisions *in* the key applications.
And, in the event of an outage (even if the machine stays up),
the user is usually distracted by the rest of the house/office
going black; is ~15 minutes of uptime going to be enough if the
user isn't AT the machine when power fails?

No one has yet addressed the market where TCO is the driving
criterion.

To some extent it is an insurance policy to not lose what I'm working on if the
power does go down suddenly. Despite having theoretical lightning protection as
well, I also shut down when there are thunderstorms about.

That hasn't been a problem, here. Whichever machines happen to be powered
up will just go in and out of "sleep" based on how long I've been "away"
from them. Booting is costly (add-in cards with legacy BIOS that
have to come up sequentially -- usually probing external busses in the
process). So, I'd rather just let them idle at a lower power.

I saw what a big lightning strike to our works building did to the switchboard
and mainframe. The surge protection devices on a big chunky copper bus bar
saved themselves by allowing transients to fry all of the terminal driver
boards. The phone lines were just a sooty shadow on the wall and it blew the
clip-on covers off the cableway.

I took some steps to protect the comms wiring for my automation system
from "deliberate acts of sabotage" (e.g., holding a tesla coil to the
8P8Cs). But, unless you are building from scratch, I don't think
it is possible to address this issue (how do you know which wires
travel alongside each other?)

About once a decade we get lightning to tree strikes within 100m. It usually
fries bedside clocks and modems (although mine survived OK last time). This was
despite a 1\" calorific spark jumping off it.

Our home was struck many years ago. (CRT) TV ended up "magnetized"
(color distortion). Solid state phones were all toasted (leaving the
house looking to be "off hook"). My other half (home at the time;
I was at work) in a panic cuz "the phones don't work" -- it's one
thing to lose power, yet another to lose POTS!
 
On 5/8/2025 12:51 PM, KevinJ93 wrote:
When the coal miners' strike power usage reductions were in effect I was
working at Marconi-Elliott in Borehamwood.  We were not allowed to have the
lights or heating on but it was permitted to use test equipment, so we would
huddle around our Tektronix 547 scopes to keep warm; they used to put out a lot
of heat.

The only \"utility\" that I can recall being VOLUNTARILY rationed was water,
back east, during a period of drought. We were \"strongly discouragd\"
from watering lawns, washing cars (car washes are far more efficient
at this as they recycle the water), etc.

Here, of course (desert southwest), peer pressure and threats of fines
tend to keep folks in line.

The idea of using a garden hose to "sweep" debris off your
driveway or sidewalk would be met with a gasp and a glare.
 
I don't think I've ever (regardless of where I've lived) experienced
a deliberate power cut.  A drunk may take out a telephone pole or
a branch may fall on some high tension wires but no one has ever
said "sorry, we're turning the lights out" (for whatever reason)

Not the same, but last Monday someone stole the signalling cable on the
high-speed railway to Andalusia, leaving the entire line OOS. I heard that
trains were authorized to run at 40 km/h, so that they could see the other
train in time and tail it. Not sure it worked.

The authorities talked of sabotage. The price of the cable when new is not even
a thousand euros, but the damage to thousands of people is huge.

There are places where copper products (wire, plumbing) are stolen
for their \"recycle value\". The solution, so far, has been to
require recyclers to get and record identification of people
bringing in such items.

A friend had the copper stripped from the roof-mounted cooling unit at
his business. Landlord held *him* responsible for its repair/replacement.

I think there have been cases of people trying to steal the wiring in
outside lighting systems -- and not taking adequate provisions to
protect against electrocution!

I would like to make some backlit copper lighted displays for the house
(AZ is The Copper State) but am afraid its oxidized color would attract
some thief eager to make a few dollars off it.

Yup.  They have a rationalization, though -- they are trying to provide the
highest availability.  Else, how much availability do you sacrifice to
maximize battery life?  Do you then start specifying battery life as a
primary selection criterion?

[Most SOHO users buy a UPS -- thinking they are being "professional" -- and
then discard it when the battery needs replacing and they discover the
costs charged by the UPS manufacturer -- or local "battery stores"]

25€. A 9Ah item, high discharge rate.

Different grades exist, here. If you buy from an electronics supplier
(e.g., Digikey), you will likely get a "fairer" price (value for money)
than a local battery store (which may be 50% higher). UPS manufacturers
typically charge about double what a reasonable price might be (though
they usually assemble the batteries into the requisite "packs"...
a trivial exercise for even 48V units).

Digikey used to have a policy of free shipping for prepaid (cash)
orders. I would buy batteries in lots of 10 and send prepayment.
Shipping charges can be a significant fraction of a battery's
cost. They now exclude batteries from this policy (when I last
checked).

I suspect the problem (rationalized by the manufacturers) is trying to
bring the battery back to full charge ASAP -- as well as keeping the
highest state of charge that the battery can support.

Which taken to extremes is very bad for battery life.

Of course.  But, they are in the PRIMARY business of selling batteries,
not UPSs!

Ugh.

And having disgruntled customers.

Think about it. If the *UPS* (hardware) failed at 3 year intervals,
no one would buy them! They'd be seen as poor quality.

But, no one is surprised that BATTERIES need replacement!

With the exception of multi-user servers, individual workstations
usually have auto-backup provisions *in* the key applications.
And, in the event of an outage (even if the machine stays up),
the user is usually distracted by the rest of the house/office
going black; is ~15 minutes of uptime going to be enough if the
user isn't AT the machine when power fails?

You need software monitoring to hibernate or power off the machine.

I have every workstation set to hibernate after ~20 minutes
of inactivity. This gives me time to get a cup of tea, go to
the bathroom, answer the door/phone, etc. without the workstation
cycling off and on.

As \"activity\" is defined by user interactions, this means I
have to deliberately start an application that disables \"sleep\"
if I won\'t be interacting with the machine and want to prevent
it from sleeping. E.g., an SSH session with a remote host that
will be busy for a while; if the workstation sleeps, the SSH
session terminates and the shell on the remote is killed off.

<frown>
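
One way around that, sketched only and Linux-specific (systemd-inhibit is a
real tool, but whether it fits the machines in question is an assumption), is
to hold an idle/sleep inhibitor for the duration of the remote job:

import subprocess, sys

def run_without_sleep(cmd):
    """Run cmd while holding a systemd idle/sleep inhibitor; the inhibitor is
    released automatically when cmd exits."""
    return subprocess.run(
        ["systemd-inhibit",
         "--what=idle:sleep",
         "--who=long-job-wrapper",
         "--why=interactive SSH session in progress",
         *cmd]).returncode

if __name__ == "__main__":
    # e.g.  python nosleep.py ssh remotehost 'make -j8'
    sys.exit(run_without_sleep(sys.argv[1:]))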

No one has yet addressed the market where TCO is the driving
criterion.
 
That is a feature of UPS design: specsmanship to get the longest run time
for the sales datasheet means that they cook their batteries. I have seen
them swell to the point of bursting inside a UPS. Thick rubber gloves needed
to remove the remains. Support metalwork was a real corroded rusty mess but
electronics above it remained OK.

That level of \"not working\" has not happened to me. Maybe because some power
failure makes me find out that the battery is dead.

I've rescued a fair number of UPSs over the years. In probably 80% of
them, the batteries have swollen to the point where removing the battery
or battery PACK is difficult. This is especially true of the "better"
UPSs (sine output, 48V battery, metal fabrication) where there is
little "give" in the mechanical design. Often one has to disassemble
the UPS to see where one can gain leverage on the battery pack
to force it from the case.

They really think I'm going to buy their vastly overpriced replacements?

I don't.

But the last battery I replaced was not even two years old; it fell 5 months
short. I replaced it just in time to serve during the Gran Apagón.

That's the problem; you don't KNOW how long a particular battery will last,
even in an environment where it is never called on for backup!

Instead, you are forced into a "reactive" mode -- waiting for something
to tell you you're screwed and need a replacement, now!

My largest UPS uses 50 pound batteries (8 of them). It's
REALLY inconvenient to have to replace them *now* cuz they
are costly and physically inconvenient to man-handle. I
would much appreciate some advance notice that they are likely
to need replacement in, say, 30 days (given the current usage
pattern).

Maybe folks will start putting more smarts into their product
designs instead of simple "threshold" events.
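
A sketch of what that advance notice could look like (illustrative only: the
self-test log, the 10-minute floor and the linear-fade assumption are all
invented here): trend the runtime measured by periodic self-tests and
extrapolate when it crosses a floor.

from datetime import date

def days_until_replacement(history, threshold_min=10.0):
    """history: list of (date, measured_runtime_minutes) from periodic self-tests.
    Least-squares fit a straight line and estimate when runtime drops below
    threshold_min. Crude: assumes a roughly linear fade, which real batteries
    only approximate."""
    t0 = history[0][0]
    xs = [(d - t0).days for d, _ in history]
    ys = [m for _, m in history]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    if slope >= 0:
        return None                    # no downward trend detected yet
    crossing = (threshold_min - ybar) / slope + xbar
    return max(0.0, crossing - xs[-1])

# Hypothetical self-test log: runtime sagging roughly a minute per month.
log = [(date(2025, 1, 1), 28.0), (date(2025, 2, 1), 27.2),
       (date(2025, 3, 1), 26.1), (date(2025, 4, 1), 25.0)]
print(f"~{days_until_replacement(log):.0f} days of margin left")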
 
My mobile phone worked all day; I could send and receive WhatsApp messages.

Are those processed "locally"?

No. I don't know if they have a centralized server or a distributed one.

Some have \"content distribution networks\".

I have a small computer doing server things, and it tried to email me as
soon as the UPS said it was running on battery. That email did not reach me
till the power came back; this could be that the fibre went OOS, or that the
UPS at my router went down instantly. I do not know.

Doesn't your UPS deliver log messages (to a syslog server or data
dumps to an FTP service)?

The one on the server did, yes, but the one on the router doesn't have that
facility.

I used to think syslogd support was just another gimmick. But,
I've come to appreciate being able to find ALL of the logs
on ONE server (that is always up). Hard to examine a log on
a device that won't boot, etc.

I have each of mine configured to give me summaries of power consumption
and line conditions each minute.  And, use a syslogd on that same server.

I don't think any of mine can report power usage.

IIRC, they report:
Date/Time
Vmin/Vmax (input)
Vout/Iout
%Wout/%VAout/%capacity
Frequency
Vbat
Internal temperature
\"external\" temperature & humidity (intended for use in a server room)

Charging at a slower rate and to a lower float voltage would
compromise the UPS's availability -- but lower maintenance costs
(of course, the manufacturer wants to sell you batteries, so you
can see where their priorities will lie!)

Indeed.

I saw in an Eaton model they mentioned two strategies - translated from Spanish:

UPS Topology: Standby (Offline) or Standby (Offline)

Eaton Ellipse ECO 650 IEC SAI Offline 650VA 400W
Eaton P/N: EL650IEC

I have a couple of Eatons in the garage. I didn't like them for
use in the office as their fans (run continuously) are louder
than I would like (and I have no desire to go tweaking fans).
 
On 5/8/2025 7:18 AM, Martin Brown wrote:
On 08/05/2025 14:41, Carlos E.R. wrote:
On 2025-05-08 14:44, Carlos E.R. wrote:

UPS Topology: Standby (Offline) or Standby (Offline)

{Phrase translated by DeepL, so inconsistent: Topología UPS: En espera (Fuera
de línea) o Standby (Offline)}

There are (at least) two major UPS topologies in play.

One is where the power to the protected device is always made by the inverter
and maintained at the correct mains voltage irrespective of input voltage to the
UPS. Useful in places where the local mains supply voltage goes up and down a
lot depending on load.

These are usually called "double conversion" (some call them "online"
but that can be misleading).

Most of these (that I've encountered) are less efficient (cuz they
are always in-the-loop) and often won't START without a functioning battery.

The other is a pass through of input mains voltage to the load under normal
conditions and an isolation relay plus cold start of the inverter within a
couple of cycles of the supply failure. This is more than good enough for PCs.
Mine can withstand a 1s blackout unprotected without any difficulty but kitchen
white goods clocks cannot.

These often can do some line voltage adjusting with an autotransformer
"for free" (part of the design).

They, also, are available in models that can be started only with a
valid battery or not. Some require mains voltage to be present, as well.

There are also cheaper units that use "stepped" waveforms to approximate
a sine wave; others that are more religious in their determination.

I do not see a reference to that "topology" except at the vendor. But it says
that the expected battery life is 4 years.

Ask for a guarantee on that... :>

[ObTrivia: SWMBO's vehicle needed a starting battery replacement
~3 years after purchase (battery life is about that for all vehicles,
here; the heat cooks them). As that was within the ~5 year "factory
warranty" period, it was no charge -- so I didn't bother to get
involved!

THAT battery, of course, failed 3 years later. But, as it was
considered part of the original vehicle (despite being a replacement),
there was no warranty extended to it.

So, I went to Costco and bought one to avoid the dealer's insane
charges!]
 
john larkin wrote:
Making money implies efficiency. And vice versa.

That's what the Left fundamentally fails to understand.


--
Defund the Thought Police
 
Martin Brown wrote:
Spain suffered a very spectacular near total loss of its national grid
yesterday taking parts of France and all of Portugal down with it.
This is an unprecedented failure of a supergrid system by cascade
failure.
It seems likely they had got the effect that widespread solar PV has on
load shedding wrong (much like happened in the UK) and so it failed
completely. Two events a second apart delivered the coup de grace.

It looks like they spent a lot more effort simulating climate than they
did simulating the grid system.

--
Defund the Thought Police
 
On 08/05/2025 22:18, Don Y wrote:
On 5/8/2025 12:51 PM, KevinJ93 wrote:
When the coal miners' strike power usage reductions were in effect I
was working at Marconi-Elliott in Borehamwood.  We were not allowed to
have the lights or heating on but it was permitted to use test
equipment, so we would huddle around our Tektronix 547 scopes to keep
warm; they used to put out a lot of heat.

The only \"utility\" that I can recall being VOLUNTARILY rationed was water,
back east, during a period of drought.  We were \"strongly discouragd\"
from watering lawns, washing cars (car washes are far more efficient
at this as they recycle the water), etc.

We also live on the watershed for that. Just far enough north to be on
the copious Northumbrian water supply (intended for all the now defunct
steelworks) but with sewage outflow going downhill to Yorkshire Water.

It has great advantages - Yorkshire Water has many leaks and not enough
reservoirs so hose pipe bans are almost inevitable every summer. One
particularly bad year they were moving drinking water in tankers from
Northumberland Water to Yorkshire to maintain supply. When it gets
really serious they have had to resort to stand pipes in the street.

Looks like this year will be a bumper year for drought orders as there
hasn't been any significant rain here for nearly a month now and we have
broken record temperatures for May already. Reservoirs in sensitive
areas are at abnormally low levels for this time of year.

https://www.theguardian.com/environment/2025/may/06/england-faces-drought-summer-reservoir-water-levels-dwindle

Here, of course (desert southwest), peer pressure and threats of fines
tend to keep folks in line.

The idea of using a garden hose to "sweep" debris off your
driveway or sidewalk would be met with a gasp and a glare.

Fair enough. Where I live the water supply is the huge Kielder reservoir
built to service a once thriving major steel industry on Teesside. Even
if it didn\'t rain at all for a year we would still be on supply.

The next village is on Yorkshire Water and often gets hosepipe bans in summer.


--
Martin Brown
 
