Restoring defaults...

Don Y
I use an RDBMS as my *sole* persistent store.

As with any "component", I expect it to provide the capabilities
that it was designed to deliver (e.g., ACID).

[This is a key assumption, thus should be examined carefully]

As such, it frees clients from having to check input data
(if from the DBMS) as the constraints and triggers on the
DBMS ensure that the data is well-behaved without need for
additional (and superfluous) checks.

But, what if the DBMS fails (like any component can)?
Should "default" values replace the lost ones? (What if
there are no realistic defaults?)
Should operations consistently return FAIL?
Should asynchronous signals be delivered so the client
can be *prepared* for the eventual failed access?
Other?

[Add to the problem the fact that the application is 24/7/365
and UNATTENDED!]

Or, should the system just panic()?
 
On 12/08/2023 07:28, Don Y wrote:
I use an RDBMS as my *sole* persistent store.

As with any "component", I expect it to provide the capabilities
that it was designed to deliver (e.g., ACID).

[This is a key assumption, thus should be examined carefully]

As such, it frees clients from having to check input data
(if from the DBMS) as the constraints and triggers on the
DBMS ensure that the data is well-behaved without need for
additional (and superfluous) checks.

But, what if the DBMS fails (like any component can)?
Should "default" values replace the lost ones? (What if
there are no realistic defaults?)

My suggestion would be to return a code for "Not known" as a default
when the database itself has gone corrupt. Then arrange that the user
code recognises the database has gone awry. A bit like NaN in FP code.
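
A rough sketch of that idea in Python (the db.read() accessor and the
names here are invented for illustration, not any particular API):

class NotKnown:
    """Sentinel returned when the store can't be trusted to answer."""
    def __repr__(self):
        return "NOT_KNOWN"

NOT_KNOWN = NotKnown()

def fetch_setting(db, key):
    """Return the stored value, or NOT_KNOWN if the DBMS misbehaves."""
    try:
        return db.read(key)        # hypothetical accessor on some DB wrapper
    except Exception:              # corruption, I/O error, timeout, ...
        return NOT_KNOWN

def apply_setpoint(value):
    if value is NOT_KNOWN:
        # Downstream code must recognise the sentinel rather than
        # quietly treating a default as real data.
        raise RuntimeError("persistent store suspect; refusing to act")
    # ... normal processing ...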

Some defaults are helpful on data entry but they can also open up user
mistakes - I didn't know that was the default...

> Should operations consistently return FAIL?

That would be my preference so that the next stage doesn't continue to
process duff data as if it was real.

Should asynchronous signals be delivered so the client
can be *prepared* for the eventual failed access?

Depends how long the search might take.
Other?

[Add to the problem the fact that the application is 24/7/365
and UNATTENDED!]

Or, should the system just panic()?

It depends whether or not it can still do useful work or might be
increasing the damage to your database by pressing on regardless.

--
Martin Brown
 
On 8/12/2023 1:05 AM, Martin Brown wrote:
On 12/08/2023 07:28, Don Y wrote:
I use an RDBMS as my *sole* persistent store.

As with any "component", I expect it to provide the capabilities
that it was designed to deliver (e.g., ACID).

[This is a key assumption, thus should be examined carefully]

As such, it frees clients from having to check input data
(if from the DBMS) as the constraints and triggers on the
DBMS ensure that the data is well-behaved without need for
additional (and superfluous) checks.

But, what if the DBMS fails (like any component can)?
Should "default" values replace the lost ones? (What if
there are no realistic defaults?)

My suggestion would be to return a code for "Not known" as a default when the
database itself has gone corrupt. Then arrange that the user code recognises
the database has gone awry. A bit like NaN in FP code.

Each client is implicitly "registered" with the relations that it
will be accessing when the client is instantiated. This is a side-effect
of my overall architecture -- you need a handle to access an object
and the presence of the handle in your namespace creates the linkage
to the DBMS. Delaying such instantiations means that a later access
to any of those objects may be costly as the server backing the object
will have to be loaded, initialized and the interface configured for
that client.

Once registered, any changes to the objects in which you've expressed
an interest result in an upcall from the DBMS to an event handler in
your code. In this way, other clients can update those objects
without requiring you to keep asking "has anything changed?"
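
For illustration only (this is not the actual plumbing here): stock
PostgreSQL can deliver the same kind of upcall with LISTEN/NOTIFY through
psycopg2. The channel name 'sched_changed' is invented:

import select
import psycopg2

conn = psycopg2.connect("dbname=app")
conn.autocommit = True

cur = conn.cursor()
cur.execute("LISTEN sched_changed;")     # register interest

def on_change(payload):
    print("relation changed:", payload)  # the client's event handler

while True:
    # Block until the server has something to say (or 5 s passes).
    if select.select([conn], [], [], 5) == ([], [], []):
        continue                         # timeout: nothing new
    conn.poll()
    while conn.notifies:
        note = conn.notifies.pop(0)
        on_change(note.payload)

On the server side, an AFTER INSERT OR UPDATE trigger calling
pg_notify('sched_changed', ...) raises the event, so no client ever has
to poll the table itself.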

But, if the DBMS itself "fails", then every object referenced by every
(active) client -- even if not yet interested in the DBMS's content -- will
be notified in a flurry of upcalls (one for each referenced object in
each client).

[Imagine a file system/media failure in a conventional product. Would you
want each potential user of that resource to be individually notified
WHEN IT WAS actually REFERENCED? Or, would you want them to know that
any FUTURE references are going to be problematic?]

[[Or, imagine a .so referenced by a binary but not resolved until an
actual reference is made to one of its components. The binary runs along
and, some time in the future, crashes because the referenced .so doesn't
exist! And, EVERY binary that references that .so suffers the same fate!]]

Some defaults are helpful on data entry but they can also open up user mistakes
- I didn't know that was the default...

Most defaults make sense. E.g., if you "lose" the training data for this
individual's gesture recognizer, then the algorithm reverts to the untrained,
"idealized" gesture templates. The recognizer "works", but not as effectively
as it would have if it had held onto the training data (e.g., it may confuse
'0' and 'O' more often).

Other defaults may "make sense" but dramatically impact the system's
performance or usability. E.g., "which voices should be recognized as
authoritative? And, what do they *sound* like?"

As a DBMS failure's effects are pervasive, it seems like that event
should be handled differently.

Should operations consistently return FAIL?

That would be my preference so that the next stage doesn't continue to process
duff data as if it was real.

Should asynchronous signals be delivered so the client
can be *prepared* for the eventual failed access?

Depends how long the search might take.

If *not* notified, any reference to the DBMS can return FAIL
in almost the time of a null-RMI. But, the client won't
get that notification until he makes a (synchronous) request.

I don't want to force the client to prefetch everything of interest
just to be sure he *has* everything that he needs.

[The loader's actions in instantiating every referenced object
at least ensure that those proxies are in place and "live"
before the client even starts to execute code. But, it doesn't
ensure the operations they represent will yield results (because
the operations are effectively indeterminate at load time).]

Other?

[Add to the problem the fact that the application is 24/7/365
and UNATTENDED!]

Or, should the system just panic()?

It depends whether or not it can still do useful work or might be increasing
the damage to your database by pressing on regardless.

In the case I'm addressing, the DBMS is already toast. The question
is how drastic the consequences to other clients, given that everything
persistent "was" there.
 
On 2023-08-12, Don Y <blockedofcourse@foo.invalid> wrote:
I use an RDBMS as my *sole* persistent store.

As with any "component", I expect it to provide the capabilities
that it was designed to deliver (e.g., ACID).

[This is a key assumption, thus should be examined carefully]

As such, it frees clients from having to check input data
(if from the DBMS) as the constraints and triggers on the
DBMS ensure that the data is well-behaved without need for
additional (and superfluous) checks.

But, what if the DBMS fails (like any component can)?

I'm guessing FROM CONTEXT ONLY that you mean a hard failure where it
starts returning errors instead of data.

Generally people want a graceful failure, figure out what that means
to your customers.

Should "default" values replace the lost ones? (What if
there are no realistic defaults?)

Try to do the right thing, what that is needs more domain knowledge
than I now have.

Should operations consistently return FAIL?
Should asynchronous signals be delivered so the client
can be *prepared* for the eventual failed access?
Other?

[Add to the problem the fact that the application is 24/7/365
and UNATTENDED!]

Or, should the system just panic()?

Not all failures are noisy, some are silent. Assume for instance that
something locks one of the resources for longer than expected.


--
Jasen.
🇺🇦 Слава Україні
 
On 8/12/2023 4:50 AM, Jasen Betts wrote:
On 2023-08-12, Don Y <blockedofcourse@foo.invalid> wrote:
I use an RDBMS as my *sole* persistent store.

As with any "component", I expect it to provide the capabilities
that it was designed to deliver (e.g., ACID).

[This is a key assumption, thus should be examined carefully]

As such, it frees clients from having to check input data
(if from the DBMS) as the constraints and triggers on the
DBMS ensure that the data is well-behaved without need for
additional (and superfluous) checks.

But, what if the DBMS fails (like any component can)?

I'm guessing FROM CONTEXT ONLY that you mean a hard failure where it
starts returning errors instead of data.

No. When it falls short wrt ACID.

The easiest sort of problem to imagine is when the data store
gets corrupted (or outright fails). But, it could also be
a software bug where it fails to provide for atomic operations
(e.g., transactions fail) or doesn't reliably return the
data previously stored (for whatever reason).
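
For concreteness, a sketch (not the architecture under discussion) of
treating a failed commit as "persistence is suspect" rather than something
to quietly retry; the acct table and the exception type are invented:

import psycopg2

class PersistenceSuspect(Exception):
    """The DBMS can no longer be trusted to honour ACID."""

def transfer(conn, src, dst, amount):
    try:
        with conn:                      # commits on success, rolls back on error
            with conn.cursor() as cur:
                cur.execute("UPDATE acct SET bal = bal - %s WHERE id = %s",
                            (amount, src))
                cur.execute("UPDATE acct SET bal = bal + %s WHERE id = %s",
                            (amount, dst))
    except (psycopg2.OperationalError, psycopg2.InternalError) as exc:
        # Atomicity/durability can no longer be assumed -- escalate,
        # don't pretend the fault was transient.
        raise PersistenceSuspect("commit failed") from exc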

I.e., it has a contractual role as a *component*. When it
stops filling that role, the notion of persistent storage
goes away.

[When a diode opens -- or shorts -- the desired function of
that component is no longer available to your design. How
that impacts your design would depend on where the diode
was located and its overall role in the design.

The DBMS is the *sole* mechanism for persistent storage.
So, if *it* fails, you can't rely on the services that
you expected it to provide.]

Generally people want a graceful failure, figure out what that means
to your customers.

That would depend on what was required of the store when
it failed.

E.g., if it were to fail before the system booted, then
all of the binary images would be suspect (or missing)
and the system would just dissipate heat.

If \"most\" of the system was already operational, then
such a failure might prevent new functions from coming
online. Or, setting changes from being remembered.
Or changes to one set of parameters from being
automatically conveyed to other clients referencing
them. Or...

[If your disk drive develops a defect while you are
using a particular application, what effect will
it have on that application? On *other* applications
not yet loaded? How inconvenient will this be to you?
To someone else?]

You'd likely not be happy if <whatever> became completely
unavailable as a result of a design decision that handled
DBMS failures in such a Draconian manner.

Should "default" values replace the lost ones? (What if
there are no realistic defaults?)

Try to do the right thing, what that is needs more domain knowledge
than I now have.

You don't always know what the right thing is. If the
HVAC schedule is corrupted by that failure, should I
keep the house *warm*? Or cold? Should I water the
yard (how much?) or wait (how long?)

Should operations consistently return FAIL?
Should asynchronous signals be delivered so the client
can be *prepared* for the eventual failed access?
Other?

[Add to the problem the fact that the application is 24/7/365
and UNATTENDED!]

Or, should the system just panic()?

Not all failures are noisy, some are silent. Assume for instance that
something locks one of the resources for longer than expected.

If something takes too long, then something else will fail to meet
its deadline(s). Barring any additional dependence on the DBMS,
those things will invoke their deadline handlers and respond
accordingly (it's ALWAYS possible for something/anything to miss a
deadline so the system has to treat that as an inherent design issue).
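
A minimal sketch of that "deadline handler" shape, with the query function
and the handler supplied by the caller (the names are invented, not the
actual runtime):

from concurrent.futures import ThreadPoolExecutor, TimeoutError

pool = ThreadPoolExecutor(max_workers=1)

def run_with_deadline(query_fn, deadline_s, on_miss):
    """Run query_fn(); if it hasn't finished within deadline_s, call on_miss()."""
    future = pool.submit(query_fn)
    try:
        return future.result(timeout=deadline_s)
    except TimeoutError:
        on_miss()          # deadline handler decides what happens next
        return None        # (the worker thread may still be stuck on the DBMS)

# e.g. run_with_deadline(lambda: db.fetch_schedule(), 0.5,
#                        on_miss=lambda: log.warning("DBMS slow; using cache"))
# where db and log are whatever the caller already has -- hypothetical here.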

Clients (in this scenario) are well-behaved. The issue is
if the DBMS manifests some faulty behavior (/A + /C + /I + /D).
I can install a redundant server but that adds complications. And,
doesn't address the failed component. I.e., sooner or later <someone>
is going to have to do <something>. The question is how much you
try to limp along until that can happen.

[If clients misbehave, then other mechanisms can ensure that
they can't do harm -- including killing their processes or powering
down their hosts. How do you apply the same strategy to the DBMS?]
 
On Sat, 12 Aug 2023 06:54:04 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

On 8/12/2023 4:50 AM, Jasen Betts wrote:
On 2023-08-12, Don Y <blockedofcourse@foo.invalid> wrote:
I use an RDBMS as my *sole* persistent store.

As with any "component", I expect it to provide the capabilities
that it was designed to deliver (e.g., ACID).

[This is a key assumption, thus should be examined carefully]

As such, it frees clients from having to check input data
(if from the DBMS) as the constraints and triggers on the
DBMS ensure that the data is well-behaved without need for
additional (and superfluous) checks.

But, what if the DBMS fails (like any component can)?

I'm guessing FROM CONTEXT ONLY that you mean a hard failure where it
starts returning errors instead of data.

No. When it falls short wrt ACID.

The easiest sort of problem to imagine is when the data store
gets corrupted (or outright fails). But, it could also be
a software bug where it fails to provide for atomic operations
(e.g., transactions fail) or doesn't reliably return the
data previously stored (for whatever reason).

I.e., it has a contractual role as a *component*. When it
stops filling that role, the notion of persistent storage
goes away.

[When a diode opens -- or shorts -- the desired function of
that component is no longer available to your design. How
that impacts your design would depend on where the diode
was located and its overall role in the design.

The DBMS is the *sole* mechanism for persistent storage.
So, if *it* fails, you can't rely on the services that
you expected it to provide.]

Generally people want a graceful failure, figure out what that means
to your customers.

That would depend on what was required of the store when
it failed.

E.g., if it were to fail before the system booted, then
all of the binary images would be suspect (or missing)
and the system would just dissipate heat.

If "most" of the system was already operational, then
such a failure might prevent new functions from coming
online. Or, setting changes from being remembered.
Or changes to one set of parameters from being
automatically conveyed to other clients referencing
them. Or...

[If your disk drive develops a defect while you are
using a particular application, what effect will
it have on that application? On *other* applications
not yet loaded? How inconvenient will this be to you?
To someone else?]

You'd likely not be happy if <whatever> became completely
unavailable as a result of a design decision that handled
DBMS failures in such a Draconian manner.

Should "default" values replace the lost ones? (What if
there are no realistic defaults?)

Try to do the right thing, what that is needs more domain knowledge
than I now have.

You don't always know what the right thing is. If the
HVAC schedule is corrupted by that failure, should I
keep the house *warm*? Or cold? Should I water the
yard (how much?) or wait (how long?)

An RDBMS software bug will make your family freeze and have to move
into a motel? Someone might reasonably include a divorce attorney in
the process.

Is the data in the RDBMS valuable? Is it otherwise backed up?

I'm so glad I'm not a programmer. I just use a hose.
 
On 8/12/2023 3:57 AM, Don Y wrote:
Or, should the system just panic()?

It depends whether or not it can still do useful work or might be increasing
the damage to your database by pressing on regardless.

In the case I'm addressing, the DBMS is already toast. The question
is how drastic the consequences to other clients, given that everything
persistent "was" there.

"Toast" may be an overstatement. "The DBMS's proper (contractually
expected) functionality cannot be verified" is a better statement.

[Imagine throwing ECC errors. When do you lose confidence in the
array being able to faithfully maintain its contents? (It's actually
an interesting question -- that few people can answer satisfactorily!)]

But, note that the DBMS is more than just a simple file-store
as the data within is *typed*, constrained and triggers actions
as it is modified/referenced (in addition to executing stored
procedures).
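
To make "typed, constrained and triggers actions" concrete, here is an
invented schema pushed through psycopg2 (a sketch only, not the schema in
question):

import psycopg2

DDL = """
CREATE TABLE setpoint (
    zone    text         NOT NULL,
    temp_c  numeric(4,1) NOT NULL CHECK (temp_c BETWEEN 5.0 AND 35.0),
    updated timestamptz  NOT NULL DEFAULT now()
);

CREATE FUNCTION announce() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('setpoint_changed', NEW.zone);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER setpoint_announce
    AFTER INSERT OR UPDATE ON setpoint
    FOR EACH ROW EXECUTE FUNCTION announce();
"""

with psycopg2.connect("dbname=app") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)

# Readers of 'setpoint' can skip range checks: the CHECK constraint keeps
# temp_c sane -- for exactly as long as the DBMS honours its contract.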

[Redundant *storage* is trivial to implement. Even tagged data (e.g.,
a simple dictionary with btree index). But, for the other functionality,
you'd essentially have to replicate the entire DBMS (even if you didn't
try to put any of the "smarts/optimizations" that the DBMS implements
in place!)]

So, if all you know is that the DBMS is "suspect", which of ACID should
you deduce are not being provided? And, what effect do those deficiencies
have on your home, business, factory floor, etc. and the algorithms that
control those applications?
 
On 2023-08-12, Don Y <blockedofcourse@foo.invalid> wrote:
On 8/12/2023 4:50 AM, Jasen Betts wrote:
On 2023-08-12, Don Y <blockedofcourse@foo.invalid> wrote:
I use an RDBMS as my *sole* persistent store.

As with any "component", I expect it to provide the capabilities
that it was designed to deliver (e.g., ACID).

[This is a key assumption, thus should be examined carefully]

As such, it frees clients from having to check input data
(if from the DBMS) as the constraints and triggers on the
DBMS ensure that the data is well-behaved without need for
additional (and superfluous) checks.

But, what if the DBMS fails (like any component can)?

I'm guessing FROM CONTEXT ONLY that you mean a hard failure where it
starts returning errors instead of data.

No. When it falls short wrt ACID.

Use a better RDBMS

The easiest sort of problem to imagine is when the data store
gets corrupted (or outright fails). But, it could also be
a software bug where it fails to provide for atomic operations
(e.g., transactions fail) or doesn't reliably return the
data previously stored (for whatever reason).

Storage failure is going to tend to be hard failures. Corruption, yeah,
that could be a problem -- does your storage media provide any sort of
internal integrity checks?

I.e., it has a contractual role as a *component*. When it
stops filling that role, the notion of persistent storage
goes away.

[When a diode opens -- or shorts -- the desired function of
that component is no longer available to your design. How
that impacts your design would depend on where the diode
was located and its overall role in the design.

The DBMS is the *sole* mechanism for persistent storage.
So, if *it* fails, you can't rely on the services that
you expected it to provide.]

Generally people want a graceful failure, figure out what that means
to your customers.

That would depend on what was required of the store when
it failed.

E.g., if it were to fail before the system booted, then
all of the binary images would be suspect (or missing)
and the system would just dissipate heat.

If "most" of the system was already operational, then
such a failure might prevent new functions from coming
online. Or, setting changes from being remembered.
Or changes to one set of parameters from being
automatically conveyed to other clients referencing
them. Or...

[If your disk drive develops a defect while you are
using a particular application, what effect will
it have on that application? On *other* applications
not yet loaded? How inconvenient will this be to you?
To someone else?]

For example: PostgreSQL tends to slow down and use more memory
until the disk comes back online. Queries requesting new (uncached)
data may hang, updates will sit in RAM until storage comes back online
and they can be written out. If you lose power it will resume in the
last consistent state.

You'd likely not be happy if <whatever> became completely
unavailable as a result of a design decision that handled
DBMS failures in such a Draconian manner.

All breakages are annoying. If it's an important part, maybe
have a spare. For storage this could be mirroring.

Should "default" values replace the lost ones? (What if
there are no realistic defaults?)

Try to do the right thing, what that is needs more domain knowledge
than I now have.

You don't always know what the right thing is.

I don't even know what the question is; in light of that failure to
communicate you get a best attempt at "the right thing".

If the HVAC schedule is corrupted by that failure, should I
keep the house *warm*? Or cold? Should I water the
yard (how much?) or wait (how long?)

current settings are probably the best you can do if you can't access
future settings.

Should operations consistently return FAIL?
Should asynchronous signals be delivered so the client
can be *prepared* for the eventual failed access?
Other?

[Add to the problem the fact that the application is 24/7/365
and UNATTENDED!]

Or, should the system just panic()?

Not all failures are noisy, some are silent. Assume for instance that
something locks one of the resources for longer than expected.

If something takes too long, then something else will fail to meet
its deadline(s). Barring any additional dependence on the DBMS,
those things will invoke their deadline handlers and respond
accordingly (it's ALWAYS possible for something/anything to miss a
deadline so the system has to treat that as an inherent design issue).

Clients (in this scenario) are well-behaved. The issue is
if the DBMS manifests some faulty behavior (/A + /C + /I + /D).
I can install a redundant server but that adds complications. And,
doesn't address the failed component. I.e., sooner or later <someone>
is going to have to do <something>. The question is how much you
try to limp along until that can happen.

How soon? Perhaps divide how much it costs by how much it matters?

[If clients misbehave, then other mechanisms can ensure that
they can't do harm -- including killing their processes or powering
down their hosts. How do you apply the same strategy to the DBMS?]

In my experience PostgreSQL is extremely reliable, especially if the
system (CPU, RAM, OS, etc.) and storage are reliable. Don't worry about
ACID, other things are much more breaky.

--
Jasen.
🇺🇦 Слава Україні
 
On 8/14/2023 4:12 AM, Jasen Betts wrote:
As such, it frees clients from having to check input data
(if from the DBMS) as the constraints and triggers on the
DBMS ensure that the data is well-behaved without need for
additional (and superfluous) checks.

But, what if the DBMS fails (like any component can)?

I'm guessing FROM CONTEXT ONLY that you mean a hard failure where it
starts returning errors instead of data.

No. When it falls short wrt ACID.

Use a better RDBMS

There are only a few FOSS RDBMSs (that I can port to different
platforms). And, I have no desire to become knowledgeable
enough in the technology to roll-my-own.

The easiest sort of problem to imagine is when the data store
gets corrupted (or outright fails). But, it could also be
a software bug where it fails to provide for atomic operations
(e.g., transactions fail) or doesn't reliably return the
data previously stored (for whatever reason).

Storage failure is going to tend to be hard failures. Corruption, yeah,
that could be a problem -- does your storage media provide any sort of
internal integrity checks?

Memory is continually verified, tested and scrubbed. But, all you can do
there is indicate that something HAS ALREADY erred. Different tablespaces
for different access/persistence styles.

Generally people want a graceful failure, figure out what that means
to your customers.

That would depend on what was required of the store when
it failed.

E.g., if it were to fail before the system booted, then
all of the binary images would be suspect (or missing)
and the system would just dissipate heat.

If "most" of the system was already operational, then
such a failure might prevent new functions from coming
online. Or, setting changes from being remembered.
Or changes to one set of parameters from being
automatically conveyed to other clients referencing
them. Or...

[If your disk drive develops a defect while you are
using a particular application, what effect will
it have on that application? On *other* applications
not yet loaded? How inconvenient will this be to you?
To someone else?]

For example: PostgreSQL tends to slow down and use more memory
until the disk comes back online. Queries requesting new (uncached)
data may hang, updates will sit in RAM until storage comes back online
and they can be written out. If you lose power it will resume in the
last consistent state.

The service stops providing as contracted. Any objects backed by it
are effectively dead. If you are also using it as a whiteboard
(of sorts), then all of those implied interconnects are also at risk.

You'd likely not be happy if <whatever> became completely
unavailable as a result of a design decision that handled
DBMS failures in such a Draconian manner.

All breakages are annoying. If it's an important part, maybe
have a spare. For storage this could be mirroring.

I already do that. But, there is always the possibility of a
component failing -- or being *defective*. If you look
through bug reports and open issues, there's an obvious recognition
that nothing is perfect.

Should "default" values replace the lost ones? (What if
there are no realistic defaults?)

Try to do the right thing, what that is needs more domain knowledge
than I now have.

You don't always know what the right thing is.

I don't even know what the question is; in light of that failure to
communicate you get a best attempt at "the right thing".

Each developer has to figure out what the "right thing" is for
his particular schema. But, he likely can only do that if he
knows what other objects may be compromised or unavailable.

Hence the idea of signaling a failure in the underlying service
instead of letting each object server throw an error.
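
One hedged way to realize "signal a failure in the underlying service":
a single monitor probes the DBMS and fans one upcall out to every
registered handler (names invented, not the actual object system):

import threading
import time
import psycopg2

_handlers = []

def on_store_failure(handler):
    """Clients register a callback to be told the store has gone suspect."""
    _handlers.append(handler)

def _probe(dsn, period_s=10):
    while True:
        try:
            with psycopg2.connect(dsn) as conn:
                with conn.cursor() as cur:
                    cur.execute("SELECT 1")      # cheap liveness/contract check
        except psycopg2.Error:
            for handler in list(_handlers):
                handler()                        # one upcall per registrant
            return                               # store is suspect; stop probing
        time.sleep(period_s)

def start_monitor(dsn):
    threading.Thread(target=_probe, args=(dsn,), daemon=True).start()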

If the HVAC schedule is corrupted by that failure, should I
keep the house *warm*? Or cold? Should I water the
yard (how much?) or wait (how long?)

current settings are probably the best you can do if you can't access
future settings.

But you may not have "current settings".

There is no backing store for virtual memory. So, anything that wants
to "sit and wait" consumes resources.

Do you really expect to check the irrigation criteria continuously?
Or, perhaps, only at some low polling frequency (that *you* may have
established based on your most recent examination of the criteria
and available precipitation)?

So, you design tasks to run-to-completion and expect them to be
reinvoked periodically (reloaded from their *persistent* images;
why store their current image AND their original image? Just
reload the original and work from that).

If a process is (effectively) sleeping, then it doesn't exist
in memory. Nor does its data. It will be reloaded when the time
is right (from the persistent store) and *it* will reload the data
that it considered important enough to preserve.

If the RDBMS is faulty, then it can't do these things.

Should operations consistently return FAIL?
Should asynchronous signals be delivered so the client
can be *prepared* for the eventual failed access?
Other?

[Add to the problem the fact that the application is 24/7/365
and UNATTENDED!]

Or, should the system just panic()?

Not all failures are noisy, some are silent. Assume for instance that
something locks one of the resources for longer than expected.

If something takes too long, then something else will fail to meet
its deadline(s). Barring any additional dependence on the DBMS,
those things will invoke their deadline handlers and respond
accordingly (it's ALWAYS possible for something/anything to miss a
deadline so the system has to treat that as an inherent design issue).

Clients (in this scenario) are well-behaved. The issue is
if the DBMS manifests some faulty behavior (/A + /C + /I + /D).
I can install a redundant server but that adds complications. And,
doesn't address the failed component. I.e., sooner or later <someone>
is going to have to do <something>. The question is how much you
try to limp along until that can happen.

How soon? Perhaps divide how much it costs by how much it matters?

Who decides how much it matters? A homeowner may be tolerant *or*
intolerant -- and of some shortfalls more/less than others.

A retail outlet -- or other commercial enterprise -- likely considerably
less so.

How long will a homeowner wait for his HVAC to be repaired? What
about his PC? Automobile? Each item has different "thresholds of pain"
based on their importance to the user, AT THAT TIME!

[E.g., I am far more tolerant of someone repairing my ACbrrr in January
than in July! And, if I have important appointments scheduled, I'm
more eager to get the new tires for it installed NOW instead of later.]

[If clients misbehave, then other mechanisms can ensure that
they can't do harm -- including killing their processes or powering
down their hosts. How do you apply the same strategy to the DBMS?]

In my experience PostgreSQL is extremely reliable, especially if the
system (CPU, RAM, OS, etc.) and storage are reliable. Don't worry about
ACID, other things are much more breaky.

EVERYTHING breaks. My (home) PostgreSQL server has been up for
hundreds of days. But, it is only lightly used, never runs out
of resources and the schemas implemented are simple -- even if
queries may be costly.

How long it would stay up -- on that same hardware -- with other
"foreign" applications running is an unknown.

I think I have to redefine my object hierarchy and make the RDBMS more
visible in it -- much like physical RAM is visible to the servers
that use it. That way, it can throw an exception directly instead
of *through* the objects that it backs.

Then, the clients can decide how to deal with the problem instead
of unilaterally taking a particular action.
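
A sketch of that restructuring (invented names, not the actual hierarchy):
the store gets its own place and its own exception type, so a client can
tell "the RDBMS is gone" apart from an ordinary per-object error and pick
its own fallback:

class StoreUnavailable(Exception):
    """The persistent store itself can no longer be trusted."""

class Store:
    def __init__(self, backend):
        self._backend = backend       # e.g. a DB connection/proxy (hypothetical)
        self.healthy = True

    def read(self, relation, key):
        if not self.healthy:
            raise StoreUnavailable(relation)
        try:
            return self._backend.read(relation, key)   # assumed accessor
        except Exception as exc:
            self.healthy = False
            raise StoreUnavailable(relation) from exc

class HvacSchedule:
    """An object backed by the store; it decides its *own* degraded mode."""
    def __init__(self, store):
        self._store = store

    def setpoint(self, zone):
        try:
            return self._store.read("setpoint", zone)
        except StoreUnavailable:
            return 20.0   # client-chosen fallback, not dictated by the store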
 
