D
Don Y
Guest
I'm designing an AI to manage resources without the need
for a sysadmin. So, I have been canvassing colleagues as to
how they "manually" manage the resources on their
workstations (as most appliances have static requirements
and, thus, don't usually need such management).
In terms of a workstation, resources are usually space and
time -- storage space (short term and secondary) and
how long you, the user, are willing to wait for some or
all of the currently executing activities to complete.
There seem to be two primary strategies:
- let the machine work on ONE thing exclusively to
ensure it has all of the resources that it may need
- let the machine work on everything of interest and
walk away for a while (i.e., remove the time constraint
on results)
For example, when doing SfM, I let the machine work exclusively
on that task as I know it will greedily consume all the
resources it can get its hands on.
OTOH, when recoding video, I'll enqueue everything and let
the machine chew on them all while I find some other activity
to assume my focus.
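As a concrete sketch of that second strategy (enqueue-and-walk-away), something like the following shell loop is what I have in mind -- filenames and ffmpeg options here are purely illustrative, and it's shown as a dry run (echoing the commands rather than invoking ffmpeg):

```shell
# Batch-recode queue: submit every job at the lowest CPU priority so the
# batch soaks up idle cycles without competing for interactive response.
# Dry run: echo each command instead of executing it.
for src in clip1.avi clip2.avi; do
  echo nice -n 19 ffmpeg -i "$src" "${src%.avi}.mkv"
done
```

Dropping the `echo` (and pointing the loop at real files) turns the dry run into the actual queue; nice(1) is what removes the time constraint -- the jobs finish whenever they finish.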
But, both of these are front-loaded decisions; you
implement your strategy up front. They assume the
machine doesn't have its own agenda to implement
(i.e., you assume any periodic/scheduled tasks are
insignificant in terms of their resource impact,
as you likely are unaware of WHEN they are triggered).
It also assumes you can estimate the magnitude of a
workload /a priori/ -- usually from past experience.
I.e., you KNOW that a certain activity will be a resource
hog -- or not. "Surprises" brought about by unexpected
differences in task complexity are rare.
What happens when something you've set out to do takes
considerably longer than you expected? Do you kill off
less important tasks? Renice them? Or, just grin and
bear it?
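For reference, the "renice them" option is the gentlest of the three -- demote the overrunning batch work instead of killing it. A minimal, self-contained illustration (using a throwaway `sleep` as the stand-in for the hog):

```shell
# Demote a running job to the lowest CPU priority rather than kill it.
# A background sleep stands in for the resource hog.
sleep 5 &
pid=$!
renice -n 19 -p "$pid"   # unprivileged users may raise niceness
ps -o ni= -p "$pid"      # report the job's new niceness
kill "$pid"
```

The harsher alternative, of course, is `kill -TERM "$pid"` -- which is exactly the static-vs-dynamic decision being asked about.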
How do you choose which activities to kill off/defer?
Is this a static decision or an assessment that is
made dynamically based on your perceived "need" of
certain work results?