Internet Speed Tests...

Rick C

I've always wondered how good Internet speed tests really are. They seem to have a tendency to have a significant delay to getting started, then ramp up in speed for a few seconds before reaching a max speed for a while before ending. If you are downloading a large file or streaming, I suppose that's a reasonable test. But if you are hitting web pages, it doesn't make sense to measure the response time by transferring one large file.

I've never seen a web site for measuring time to load web pages. Is there anything like that? I wonder what it would take to get an ISP to pay attention to it? I remember dealing with Comcast about slow speeds and they would only use numbers from their own server that was local to the network I was on. So it wasn't measuring the speed I actually got from the Internet, just their local speeds!

Internet access is pretty poor in Puerto Rico compared to Virginia. It's not slow speeds, it's irregular service. It can be up for a day, then spotty for a day, then out for an hour or two. A few of the places I've stayed had rolling IP addresses. Some websites that require login would immediately kick me out saying my IP address had changed. This would be continuous as if the IP was changing every second! But it didn't happen with every site I logged into. What was that about?

--

Rick C.

- Get 1,000 miles of free Supercharging
- Tesla referral code - https://ts.la/richard11209
 
On 1/29/2022 9:46 PM, Rick C wrote:
I've always wondered how good Internet speed tests really are. They seem to have a tendency to have a significant delay to getting started, then ramp up in speed for a few seconds before reaching a max speed for a while before ending. If you are downloading a large file or streaming, I suppose that's a reasonable test. But if you are hitting web pages, it doesn't make sense to measure the response time by transferring one large file.

I've never seen a web site for measuring time to load web pages. Is there anything like that? I wonder what it would take to get an ISP to pay attention to it? I remember dealing with Comcast about slow speeds and they would only use numbers from their own server that was local to the network I was on. So it wasn't measuring the speed I actually got from the Internet, just their local speeds!

Internet access is pretty poor in Puerto Rico compared to Virginia. It's not slow speeds, it's irregular service. It can be up for a day, then spotty for a day, then out for an hour or two. A few of the places I've stayed had rolling IP addresses. Some websites that require login would immediately kick me out saying my IP address had changed. This would be continuous as if the IP was changing every second! But it didn't happen with every site I logged into. What was that about?

Yeah, you can use the developer tools included in most browsers for many
statistics, including DNS lookup and download times, e.g.

<https://www.a2hosting.com/kb/installable-applications/optimization-and-configuration/measuring-website-performance-using-google-chrome-developer-tools>
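
If you want the same per-phase numbers from the command line, curl can report them.
A minimal sketch (example.com is just a placeholder, and this times a single document,
not a whole page with all its resources):

curl -s -o /dev/null \
     -w "DNS %{time_namelookup}s  connect %{time_connect}s  TLS %{time_appconnect}s  first byte %{time_starttransfer}s  total %{time_total}s\n" \
     https://example.com/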
 
On a sunny day (Sat, 29 Jan 2022 18:46:55 -0800 (PST)) it happened Rick C
<gnuarm.deletethisbit@gmail.com> wrote in
<3a7b6b65-36b8-4393-963a-0f6ef54cb9ben@googlegroups.com>:

I've always wondered how good Internet speed tests really are. They seem to
have a tendency to have a significant delay to getting started, then ramp
up in speed for a few seconds before reaching a max speed for a while before
ending. If you are downloading a large file or streaming, I suppose that's
a reasonable test. But if you are hitting web pages, it doesn't make sense
to measure the response time by transferring one large file.

I've never seen a web site for measuring time to load web pages. Is there anything
like that? I wonder what it would take to get an ISP to pay attention
to it? I remember dealing with Comcast about slow speeds and they would
only use numbers from their own server that was local to the network I was
on. So it wasn't measuring the speed I actually got from the Internet, just
their local speeds!

Internet access is pretty poor in Puerto Rico compared to Virginia. It's not
slow speeds, it's irregular service. It can be up for a day, then spotty
for a day, then out for an hour or two. A few of the places I've stayed had
rolling IP addresses. Some websites that require login would immediately
kick me out saying my IP address had changed. This would be continuous as
if the IP was changing every second! But it didn't happen with every site
I logged into. What was that about?

If I want to know if my internet connection is normal speed I simply type:
ping 8.8.8.8
in a Linux terminal.
That is Google's public DNS server:
~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_req=1 ttl=57 time=34.9 ms

Any slower and my 4G is not working...
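
If the problem is spotty service rather than raw speed, a longer run also shows
packet loss; a rough sketch (any reachable address will do, 8.8.8.8 is just convenient):

~# ping -c 60 8.8.8.8

The summary at the end gives packet loss and min/avg/max round-trip times.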


You can also download a file with "wget" and it will show the speed for that site:

~# wget http://panteltje.com/index.html
--2022-01-30 07:26:51-- http://panteltje.com/index.html
Resolving panteltje.com (panteltje.com)... 92.205.4.14
Connecting to panteltje.com (panteltje.com)|92.205.4.14|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2855 (2.8K) [text/html]
Saving to: 'index.html'

100%[==========================================>] 2,855       --.-K/s   in 0s

2022-01-30 07:26:52 (71.6 MB/s) - 'index.html' saved [2855/2855]

-------------------^^^^^^^^^^^^

Better to use a longer file, or the MB/s figure may not be real (the short one may be cached somewhere).
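
For example, a quick sketch with a much larger file, so the transfer runs long
enough to mean something (the URL is only a placeholder for any big file on a
well-connected server):

~# wget -O /dev/null http://speedtest.example.net/100MB.bin

The rate wget reports at the end is then a fair estimate of sustained download speed.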

There are websites that will show you your own IP address, google for it.

If all else fails maybe buy a SpaceX satellite terminal?
 
On 1/30/2022 4:46, Rick C wrote:
I've always wondered how good Internet speed tests really are. They seem to have a tendency to have a significant delay to getting started, then ramp up in speed for a few seconds before reaching a max speed for a while before ending. If you are downloading a large file or streaming, I suppose that's a reasonable test. But if you are hitting web pages, it doesn't make sense to measure the response time by transferring one large file.

I've never seen a web site for measuring time to load web pages. Is there anything like that? I wonder what it would take to get an ISP to pay attention to it? I remember dealing with Comcast about slow speeds and they would only use numbers from their own server that was local to the network I was on. So it wasn't measuring the speed I actually got from the Internet, just their local speeds!

Internet access is pretty poor in Puerto Rico compared to Virginia. It's not slow speeds, it's irregular service. It can be up for a day, then spotty for a day, then out for an hour or two. A few of the places I've stayed had rolling IP addresses. Some websites that require login would immediately kick me out saying my IP address had changed. This would be continuous as if the IP was changing every second! But it didn't happen with every site I logged into. What was that about?

It is impractical to make such a test reliable. Much of the result
depends on the person testing it, like which DNS server they use;
a typical website accesses a lot of different IP addresses.
Most likely what is annoying you is the ad-related accesses:
if some Google or whatever central thing is slow you are made
to wait for it, sometimes for a very long time. Then some
addresses (IP addresses) are dynamically moved from one location
to another; it is quite a mess really.
The tests you are aware of do what is practical to do:
you can see how fast download and upload work and how they reach full
speed (for TCP a slow start when transmitting is mandatory, though not
necessarily slow enough to be easily noticeable) and that
is it. You can change the test server location; various destinations
are routed differently and some are slower than others.
 
On Sun, 30 Jan 2022 06:41:41 GMT, Jan Panteltje
<pNaonStpealmtje@yahoo.com> wrote:

On a sunny day (Sat, 29 Jan 2022 18:46:55 -0800 (PST)) it happened Rick C
<gnuarm.deletethisbit@gmail.com> wrote in
<3a7b6b65-36b8-4393-963a-0f6ef54cb9ben@googlegroups.com>:

I've always wondered how good Internet speed tests really are. They seem to
have a tendency to have a significant delay to getting started, then ramp
up in speed for a few seconds before reaching a max speed for a while before
ending. If you are downloading a large file or streaming, I suppose that's
a reasonable test. But if you are hitting web pages, it doesn't make sense
to measure the response time by transferring one large file.

I've never seen a web site for measuring time to load web pages. Is there anything
like that? I wonder what it would take to get an ISP to pay attention
to it? I remember dealing with Comcast about slow speeds and they would
only use numbers from their own server that was local to the network I was
on. So it wasn't measuring the speed I actually got from the Internet, just
their local speeds!

Internet access is pretty poor in Puerto Rico compared to Virginia. It's not
slow speeds, it's irregular service. It can be up for a day, then spotty
for a day, then out for an hour or two. A few of the places I've stayed had
rolling IP addresses. Some websites that require login would immediately
kick me out saying my IP address had changed. This would be continuous as
if the IP was changing every second! But it didn't happen with every site
I logged into. What was that about?

If I want to know if my internet connection is normal speed I simply type:
ping 8.8.8.8
in a Linux terminal.
That is Google's public DNS server:
~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_req=1 ttl=57 time=34.9 ms

Any slower and my 4G is not working...


You can also download a file with "wget" and it will show the speed for that site:

~# wget http://panteltje.com/index.html
--2022-01-30 07:26:51-- http://panteltje.com/index.html
Resolving panteltje.com (panteltje.com)... 92.205.4.14
Connecting to panteltje.com (panteltje.com)|92.205.4.14|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2855 (2.8K) [text/html]
Saving to: 'index.html'

100%[==========================================>] 2,855       --.-K/s   in 0s

2022-01-30 07:26:52 (71.6 MB/s) - 'index.html' saved [2855/2855]

-------------------^^^^^^^^^^^^

Better to use a longer file, or the MB/s figure may not be real (the short one may be cached somewhere).

There are websites that will show you your own IP address, google for it.

If all else fails maybe buy a SpaceX satellite terminal?

Most browser-based speed tests report ping or latency time. A specific
web site could be anything over that.

The M-Lab test (the Google default) shows me 135/40 Mbps and 7 ms ping on
Comcast cable.
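
For a scriptable version of the same kind of test there is also the third-party
speedtest-cli tool (assuming it is installed, e.g. via pip); it prints ping,
download and upload against a nearby test server:

speedtest-cli --simple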



--

I yam what I yam - Popeye
 
On Sunday, January 30, 2022 at 8:49:27 AM UTC-5, Dimiter Popoff wrote:
On 1/30/2022 4:46, Rick C wrote:
I've always wondered how good Internet speed tests really are. They seem to have a tendency to have a significant delay to getting started, then ramp up in speed for a few seconds before reaching a max speed for a while before ending. If you are downloading a large file or streaming, I suppose that's a reasonable test. But if you are hitting web pages, it doesn't make sense to measure the response time by transferring one large file.

I've never seen a web site for measuring time to load web pages. Is there anything like that? I wonder what it would take to get an ISP to pay attention to it? I remember dealing with Comcast about slow speeds and they would only use numbers from their own server that was local to the network I was on. So it wasn't measuring the speed I actually got from the Internet, just their local speeds!

Internet access is pretty poor in Puerto Rico compared to Virginia. It's not slow speeds, it's irregular service. It can be up for a day, then spotty for a day, then out for an hour or two. A few of the places I've stayed had rolling IP addresses. Some websites that require login would immediately kick me out saying my IP address had changed. This would be continuous as if the IP was changing every second! But it didn't happen with every site I logged into. What was that about?

It is impractical to make such a test reliable. Much of the result
depends on the person testing it, like which DNS server they use;
a typical website accesses a lot of different IP addresses.
Most likely what is annoying you is the ad-related accesses:
if some Google or whatever central thing is slow you are made
to wait for it, sometimes for a very long time. Then some
addresses (IP addresses) are dynamically moved from one location
to another; it is quite a mess really.
The tests you are aware of do what is practical to do:
you can see how fast download and upload work and how they reach full
speed (for TCP a slow start when transmitting is mandatory, though not
necessarily slow enough to be easily noticeable) and that
is it. You can change the test server location; various destinations
are routed differently and some are slower than others.

I guess my point is that timing the download of a large file is not a good measure of much of anything. Web sites consist of many files, often none of them large. The time it takes for a web page to be viewable depends on the time it takes for the entire protocol of loading the HTML, requesting the various files specified there, then loading the various files specified there, and so on. The observed delays in the initial display of web pages in the browser are often much, much longer than any transmission time of the files in a page. I see large variations between different Internet providers, so it's not my PC or browser.

I did fix an issue I periodically have in Virginia. Seems my router has developed some problems. I replaced it and am getting much better results now. Ping times in the teens.

--

Rick C.

+ Get 1,000 miles of free Supercharging
+ Tesla referral code - https://ts.la/richard11209
 
On Sunday, 30 January 2022 at 17:24:29 UTC, gnuarm.del...@gmail.com wrote:

I guess my point is that timing the download of a large file is not a good measure of much of anything. Web sites consist of many files, often none of them large. The time it takes for a web page to be viewable depends on the time it takes for the entire protocol of loading the HTML, requesting the various files specified there, then loading the various files specified there, and so on. The observed delays in the initial display of web pages in the browser are often much, much longer than any transmission time of the files in a page. I see large variations between different Internet providers, so it's not my PC or browser.
Each of those various files will also have an associated domain-name lookup, so switching to
a DNS server that responds more quickly can make a big difference.

Another factor which can sometimes make websites seem very slow is that
occasionally IPv6 connectivity is broken. Most browsers nowadays will try the IPv6 address
first and, if it doesn't respond within a certain time, will then fall back to IPv4. If the default gateway
is announcing an IPv6 route which doesn't work then everything gets very slow.
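
Both effects are easy to check from a shell; a rough sketch (example.com is only a
placeholder, and dig comes with the usual DNS utilities):

dig example.com @8.8.8.8 | grep "Query time"
curl -4 -s -o /dev/null -w "IPv4 total: %{time_total}s\n" https://example.com/
curl -6 -s -o /dev/null -w "IPv6 total: %{time_total}s\n" https://example.com/

If the -6 fetch stalls or times out while the -4 one is quick, broken IPv6 is the
likely culprit.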

John
 
On 30/01/2022 02:46, Rick C wrote:
I've always wondered how good Internet speed tests really are. They
seem to have a tendency to have a significant delay to getting
started, then ramp up in speed for a few seconds before reaching a
max speed for a while before ending. If you are downloading a large
file or streaming, I suppose that's a reasonable test. But if you
are hitting web pages, it doesn't make sense to measure the response
time by transferring one large file.

It is still not a bad proxy for internet speed.

The other one is ping time to the server which gives you a good idea of
the round trip time for the minimal short message.

Often for web pages your delays are due to a dodgy slow DNS lookup or
some turgid lethargic script on an overloaded web server.

I've never seen a web site for measuring time to load web pages. Is
there anything like that? I wonder what it would take to get an ISP
to pay attention to it? I remember dealing with Comcast about slow
speeds and they would only use numbers from their own server that was
local to the network I was on. So it wasn't measuring the speed I
actually got from the Internet, just their local speeds!

Pick a reasonably static website and check the time to render it. Or
mirror it to your hard disk. Various spider-like software exists to do
this - the owner of the website may take exception if you do it too
often or for large chunks of their content.

Most websites these days come up so quickly on a fast line that you may
need to time it in software rather than manually. The only ones that
don't are corporates with 100MB video files on the landing page :(
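
One rough way to do that timing is to let wget fetch a page plus the images,
stylesheets and scripts it references (the URL is a placeholder, and be polite
about how often you run this against someone else's site):

time wget -q -p -H -P /tmp/pagetest https://example.com/

-p pulls the page requisites and -H lets it follow them to other hosts, so the
elapsed time is closer to what a browser waits for, minus script execution.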

Internet access is pretty poor in Puerto Rico compared to Virginia.
It's not slow speeds, it's irregular service. It can be up for a
day, then spotty for a day, then out for an hour or two. A few of
the places I've stayed had rolling IP addresses. Some websites that
require login would immediately kick me out saying my IP address had
changed. This would be continuous as if the IP was changing every
second! But it didn't happen with every site I logged into. What
was that about?

Many sites don't like it if your IP address changes mid-session.

--
Regards,
Martin Brown
 
On 1/30/2022 1:01 PM, John Walliker wrote:
Each of those various files will also have an associated domain-name lookup, so switching to
a DNS server that responds more quickly can make a big difference.

Note that resolving *a* hostname requires contacting more than one name server.
Starting with \"your\" DNS server, your request gets redirected to other servers
until it finds the name you\'re looking for, already cached... *or*, directs
you to the (authoritative) name server for the targeted domain.

It\'s likely that \"google.com\" is already cached in *your* DNS (likely even on
your machine -- with a sufficiently long TTL). But, somejamoke.com will likely
require queries of the root name server, a TLD server and, finally, the
authoritative server for the domain sought.

So, resolving foo.com requires a request to "your" DNS (after names cached
on your host are checked). Then, a redirect to the root name server to find
the server for the ".com" TLD. Then, contact *that* DNS -- which will give
you the contact information for foo.com's DNS. And, then contact that server
to resolve names within that domain (e.g., www.foo.com).

If a page references objects in a variety of domains (facebook.net, google,
etc.) then this process has to happen for all of them -- though typically only
once as the TTL will let them linger *in* your cache.

[Again, the \"well known\" domains are likely cached \"close to you\" as you or
your peers likely frequently encounter them in your travels]

And, of course, each HTTP request requires a TCP connection be established
(more involved than a simple UDP datagram) -- though a persistent connection
can be used TO EACH SITE to allow requests to be pipelined.
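
You can watch that chain of referrals yourself with dig; a small sketch (the
hostname is just an example):

dig +trace www.example.com

+trace asks a root server first, then the TLD servers, then the domain's
authoritative servers, printing each step on the way down.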
 
In article <3a7b6b65-36b8-4393-963a-0f6ef54cb9ben@googlegroups.com>,
Rick C <gnuarm.deletethisbit@gmail.com> wrote:
I've always wondered how good Internet speed tests really are. They seem to have a tendency to have a significant delay to getting
started, then ramp up in speed for a few seconds before reaching a max speed for a while before ending. If you are downloading a
large file or streaming, I suppose that's a reasonable test. But if you are hitting web pages, it doesn't make sense to measure
the response time by transferring one large file.

That could very well be due to the "slow start" architecture in
TCP/IP. TCP connections deliberately start out with a relatively
modest "receive window" size, and then ramp up the window size once
the sending and receiving systems have exchanged enough data for a
meaningful evaluation of the actual performance.

The intent here (as I recall it) is to make sure that the sending
system (server) doesn't shove out a huge glob of data faster than the
receiving system can actually receive and process it. To do so would
either fill up buffers in the intervening network switches/routers, or
cause packets to be dropped when the buffers overflow... and these
things have a bad effect on network performance and reliability.

This means that the initial connection for a web page can be a bit
slow getting started. Web browsers (and web servers) these days try
to overcome this by a couple of tricks:

- Most servers can handle multiple requests, one after another, on
a single TCP connection. Web browsers take advantage of this
by \"pipelining\" several such requests... once they receive the
main page from the server and start to parse it, they fire off
additional requests for the other resources mentioned in that
page which are on the same server. This allows the incoming
server-to-client connection to get past the slow-start stage
and deliver content at full speed.

- Web browsers will usually make connection to different servers
in parallel.
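
The benefit of reusing one connection is easy to see with curl, which keeps the
connection open across several URLs given in a single invocation; a sketch with
placeholder URLs on the same host:

curl -s -w "%{time_total}s\n" -o /dev/null https://example.com/a -o /dev/null https://example.com/b

The second time printed is usually noticeably smaller, because that transfer skips
the DNS lookup and the TCP/TLS handshakes and rides the already-open connection.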

>I've never seen a web site for measuring time to load web pages.

There's so much variation in how web pages are organized (number
of resources they fetch, sizes of those resources, where they are
fetched from) that it's awfully difficult to develop an honest
apples-and-apples comparison.

This problem is made even worse by the fact that many web
servers are \"virtual\" - that is, there are numerous \"clones\"
of a given content distribution server scattered around the net,
and DNS or other tricks are used to route your request to the
\"best\" (closest, fastest) server at any given instant in time.

As a result, if you run the same test twice a few minutes apart,
or switch provider networks to do a comparison, you may end up
fetching web content from a completely different set of servers.
 
On Monday, January 31, 2022 at 1:45:34 PM UTC-5, Dave Platt wrote:
In article <3a7b6b65-36b8-4393...@googlegroups.com>,
Rick C <gnuarm.del...@gmail.com> wrote:
I've always wondered how good Internet speed tests really are. They seem to have a tendency to have a significant delay to getting
started, then ramp up in speed for a few seconds before reaching a max speed for a while before ending. If you are downloading a
large file or streaming, I suppose that's a reasonable test. But if you are hitting web pages, it doesn't make sense to measure
the response time by transferring one large file.
That could very well be due to the "slow start" architecture in
TCP/IP. TCP connections deliberately start out with a relatively
modest "receive window" size, and then ramp up the window size once
the sending and receiving systems have exchanged enough data for a
meaningful evaluation of the actual performance.

The intent here (as I recall it) is to make sure that the sending
system (server) doesn't shove out a huge glob of data faster than the
receiving system can actually receive and process it. To do so would
either fill up buffers in the intervening network switches/routers, or
cause packets to be dropped when the buffers overflow... and these
things have a bad effect on network performance and reliability.

This means that the initial connection for a web page can be a bit
slow getting started. Web browsers (and web servers) these days try
to overcome this by a couple of tricks:

- Most servers can handle multiple requests, one after another, on
a single TCP connection. Web browsers take advantage of this
by \"pipelining\" several such requests... once they receive the
main page from the server and start to parse it, they fire off
additional requests for the other resources mentioned in that
page which are on the same server. This allows the incoming
server-to-client connection to get past the slow-start stage
and deliver content at full speed.

- Web browsers will usually make connection to different servers
in parallel.
I've never seen a web site for measuring time to load web pages.
There's so much variation in how web pages are organized (number
of resources they fetch, sizes of those resources, where they are
fetched from) that it's awfully difficult to develop an honest
apples-and-apples comparison.

This problem is made even worse by the fact that many web
servers are \"virtual\" - that is, there are numerous \"clones\"
of a given content distribution server scattered around the net,
and DNS or other tricks are used to route your request to the
\"best\" (closest, fastest) server at any given instant in time.

As a result, if you run the same test twice a few minutes apart,
or switch provider networks to do a comparison, you may end up
fetching web content from a completely different set of servers.

If slow web surfing were the result of inherent mechanisms in the TCP/IP protocol, this would be observed on every computer, on every network, through every ISP, to every web site. I don't see this. I see wide variations in both raw speed and the time to get a transfer started. When there is a bottleneck in the system, the speed tests often start at some very low rate and slowly ramp up to a more reasonable speed. On faster networks the ramp-up is only slight, with the initial transfer speed already rather fast.

--

Rick C.

-- Get 1,000 miles of free Supercharging
-- Tesla referral code - https://ts.la/richard11209
 
On 1/31/2022 20:45, Dave Platt wrote:
In article <3a7b6b65-36b8-4393-963a-0f6ef54cb9ben@googlegroups.com>,
Rick C <gnuarm.deletethisbit@gmail.com> wrote:
I've always wondered how good Internet speed tests really are. They seem to have a tendency to have a significant delay to getting
started, then ramp up in speed for a few seconds before reaching a max speed for a while before ending. If you are downloading a
large file or streaming, I suppose that's a reasonable test. But if you are hitting web pages, it doesn't make sense to measure
the response time by transferring one large file.

That could very well be due to the "slow start" architecture in
TCP/IP. TCP connections deliberately start out with a relatively
modest "receive window" size, and then ramp up the window size once
the sending and receiving systems have exchanged enough data for a
meaningful evaluation of the actual performance.

The slow start does exist and must be done by every TCP implementation,
but it works the other way around.
It is the transmitting peer which is to start slowly, sensing the
TCP window behaviour of the receiving peer, taking into account the
RTT etc. (else you get what John recently called a bang-bang loop,
and my association was with "Chitty Chitty Bang Bang"...)

Then the TCP window can be enlarged by the window-scale option
negotiated during the SYN/SYN-ACK exchange (the window field itself is
16 bit, which became too little decades ago; the scale option, probably
universally applied these days, multiplies it by up to 2^14). At least
my implementation for dps does it. I have seen plain 16-bit connections,
one FTP server IIRC was sticking to 16, but I have used it only locally
so at <1 ms RTT it is sort of OK.
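
If you want to see what a particular connection actually negotiated, the options are
visible in the SYN/SYN-ACK exchange with tcpdump; a sketch (needs root, and eth0 is a
placeholder interface name):

tcpdump -ni eth0 'tcp[tcpflags] & (tcp-syn) != 0'

Look for "wscale N" in the options of both SYN packets; window values advertised later
on that connection are shifted left by N.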

Websites are slowed down in an annoying way almost 100% by ad services,
some google facebook you name it ad server gets overloaded and you
have to wait for it, websites are typically written with the ads
having priority over the info you are after, unsurprisingly.

DNS references/DNS server choice add some latency but if nothing is
wrong with the dns operation that influence is minor, even when
an uncached domain has to be located.

======================================================
Dimiter Popoff, TGI http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/
 
On 31/01/22 20:22, Dimiter_Popoff wrote:
Websites are slowed down in an annoying way almost 100% by ad services,
some google facebook you name it ad server gets overloaded and you
have to wait for it, websites are typically written with the ads
having priority over the info you are after, unsurprisingly.

Ad servers are slow partly because they take time to /auction/
your eyeballs to the highest bidder. Yes, they pass info to many
of their customers, so their customers can decide how valuable
(or not) you are to them.

NoScript and AdBlock are necessary when browsing the web.
Carl Sagan foresaw the necessity for them in his novel Contact,
albeit with TV advertising rather than the web.
 
On 2/1/2022 0:28, Tom Gardner wrote:
On 31/01/22 20:22, Dimiter_Popoff wrote:
Websites are slowed down in an annoying way almost 100% by ad services,
some google facebook you name it ad server gets overloaded and you
have to wait for it, websites are typically written with the ads
having priority over the info you are after, unsurprisingly.

Ad servers are slow partly because they take time to /auction/
your eyeballs to the highest bidder. Yes, they pass info to many
of their customers, so their customers can decide how valuable
(or not) you are to them.

NoScript and AdBlock are necessary when browsing the web.
Carl Sagan foresaw the necessity for them in his novel Contact,
albeit with TV advertising rather than the web.

Probably all of your suspicions of what they do are correct and
likely we can't suspect enough :). Things become messier by the
day, lately the ISP-s (or some entity behind them) join in; for
example, since may be a month if I start facebook-browsing (I am
not a very active facebooker but some days I spend 5-10 minutes,
mainly looking at posts to local village groups and the poster's
profiles) after only 2-3 minutes of active browsing certain parts
become non-responsive (not the main page, just what it references,
photos, posts, menus etc.). And this is not facebook's fault, if
I log in via TOR things work just fine, so it is either the local
ISP or something between them and facebook, who knows. My guess is
the local ISP get too much facebook traffic (all those kids with
their phones) and limit it but it is as good as anybody's guess.

Anyway, I don't even try to guess who is doing what on the web.
I don't switch scripts off or use adblockers, I guess I don't
waste that much time browsing. Mostly the BBC website, football
scores etc., I don't know how much of these will work if I
block the ads, so far it is tolerable for me.
 
On Monday, January 31, 2022 at 11:51:27 PM UTC+1, Dimiter Popoff wrote:
On 2/1/2022 0:28, Tom Gardner wrote:
On 31/01/22 20:22, Dimiter_Popoff wrote:
Websites are slowed down in an annoying way almost 100% by ad services,
some google facebook you name it ad server gets overloaded and you
have to wait for it, websites are typically written with the ads
having priority over the info you are after, unsurprisingly.

Ad servers are slow partly because they take time to /auction/
your eyeballs to the highest bidder. Yes, they pass info to many
of their customers, so their customers can decide how valuable
(or not) you are to them.

NoScript and AdBlock are necessary when browsing the web.
Carl Sagan foresaw the necessity for them in his novel Contact,
albeit with TV advertising rather than the web.
Probably all of your suspicions of what they do are correct and
likely we can't suspect enough :). Things become messier by the
day, lately the ISP-s (or some entity behind them) join in; for
example, since may be a month if I start facebook-browsing (I am
not a very active facebooker but some days I spend 5-10 minutes,
mainly looking at posts to local village groups and the poster's
profiles) after only 2-3 minutes of active browsing certain parts
become non-responsive (not the main page, just what it references,
photos, posts, menus etc.). And this is not facebook's fault, if
I log in via TOR things work just fine, so it is either the local
ISP or something between them and facebook, who knows. My guess is
the local ISP get too much facebook traffic (all those kids with
their phones) and limit it but it is as good as anybody's guess.

browsing facebook is a drop in the ocean compared to watching a movie

Anyway, I don't even try to guess who is doing what on the web.
I don't switch scripts off or use adblockers, I guess I don't
waste that much time browsing. Mostly the BBC website, football
scores etc., I don't know how much of these will work if I
block the ads, so far it is tolerable for me.

everything still works, just a million times better

install something like adblock+ and you'll never go back
 
On 31/01/22 22:51, Dimiter_Popoff wrote:
On 2/1/2022 0:28, Tom Gardner wrote:
On 31/01/22 20:22, Dimiter_Popoff wrote:
Websites are slowed down in an annoying way almost 100% by ad services,
some google facebook you name it ad server gets overloaded and you
have to wait for it, websites are typically written with the ads
having priority over the info you are after, unsurprisingly.

Ad servers are slow partly because they take time to /auction/
your eyeballs to the highest bidder. Yes, they pass info to many
of their customers, so their customers can decide how valuable
(or not) you are to them.

NoScript and AdBlock are necessary when browsing the web.
Carl Sagan foresaw the necessity for them in his novel Contact,
albeit with TV advertising rather than the web.


Probably all of your suspicions of what they do are correct and
likely we can't suspect enough :). Things become messier by the
day, lately the ISP-s (or some entity behind them) join in; for
example, since may be a month if I start facebook-browsing (I am
not a very active facebooker but some days I spend 5-10 minutes,
mainly looking at posts to local village groups and the poster's
profiles) after only 2-3 minutes of active browsing certain parts
become non-responsive (not the main page, just what it references,
photos, posts, menus etc.). And this is not facebook's fault, if
I log in via TOR things work just fine, so it is either the local
ISP or something between them and facebook, who knows. My guess is
the local ISP get too much facebook traffic (all those kids with
their phones) and limit it but it is as good as anybody's guess.

Anyway, I don't even try to guess who is doing what on the web.
I don't switch scripts off or use adblockers, I guess I don't
waste that much time browsing. Mostly the BBC website, football
scores etc., I don't know how much of these will work if I
block the ads, so far it is tolerable for me.

Try adblock; you can always uninstall/disable it.

NoScript is more for the tin-foil hat brigade, and
requires nursing.

Some ISPs (IIRC Comcast? Verizon?) use deep packet inspection
/and modification/ to modify web pages and insert
/their/ chosen adverts.

Why do you think many pages have farcebook and twatter
(and other) logos on them? Basically it is so those
companies can tell which pages your browser has visited.
That takes time to complete.
 
On Monday, January 31, 2022 at 6:06:35 PM UTC-5, lang...@fonz.dk wrote:
On Monday, January 31, 2022 at 11:51:27 PM UTC+1, Dimiter Popoff wrote:
On 2/1/2022 0:28, Tom Gardner wrote:
On 31/01/22 20:22, Dimiter_Popoff wrote:
Websites are slowed down in an annoying way almost 100% by ad services,
some google facebook you name it ad server gets overloaded and you
have to wait for it, websites are typically written with the ads
having priority over the info you are after, unsurprisingly.

Ad servers are slow partly because they take time to /auction/
your eyeballs to the highest bidder. Yes, they pass info to many
of their customers, so their customers can decide how valuable
(or not) you are to them.

NoScript and AdBlock are necessary when browsing the web.
Carl Sagan foresaw the necessity for them in his novel Contact,
albeit with TV advertising rather than the web.
Probably all of your suspicions of what they do are correct and
likely we can't suspect enough :). Things become messier by the
day, lately the ISP-s (or some entity behind them) join in; for
example, since may be a month if I start facebook-browsing (I am
not a very active facebooker but some days I spend 5-10 minutes,
mainly looking at posts to local village groups and the poster's
profiles) after only 2-3 minutes of active browsing certain parts
become non-responsive (not the main page, just what it references,
photos, posts, menus etc.). And this is not facebook's fault, if
I log in via TOR things work just fine, so it is either the local
ISP or something between them and facebook, who knows. My guess is
the local ISP get too much facebook traffic (all those kids with
their phones) and limit it but it is as good as anybody's guess.
browsing facebook is a drop in the ocean compared to watching a movie

Anyway, I don't even try to guess who is doing what on the web.
I don't switch scripts off or use adblockers, I guess I don't
waste that much time browsing. Mostly the BBC website, football
scores etc., I don't know how much of these will work if I
block the ads, so far it is tolerable for me.
everything still works, just a million times better

install something like adblock+ and you'll never go back

I wonder, do you think this could have anything to do with the fact that I am using dial-up?

--

Rick C.

-+ Get 1,000 miles of free Supercharging
-+ Tesla referral code - https://ts.la/richard11209
 
On Monday, January 31, 2022 at 6:06:35 PM UTC-5, lang...@fonz.dk wrote:
On Monday, January 31, 2022 at 11:51:27 PM UTC+1, Dimiter Popoff wrote:
On 2/1/2022 0:28, Tom Gardner wrote:
On 31/01/22 20:22, Dimiter_Popoff wrote:
Websites are slowed down in an annoying way almost 100% by ad services,
some google facebook you name it ad server gets overloaded and you
have to wait for it, websites are typically written with the ads
having priority over the info you are after, unsurprisingly.

Ad servers are slow partly because they take time to /auction/
your eyeballs to the highest bidder. Yes, they pass info to many
of their customers, so their customers can decide how valuable
(or not) you are to them.

NoScript and AdBlock are necessary when browsing the web.
Carl Sagan foresaw the necessity for them in his novel Contact,
albeit with TV advertising rather than the web.
Probably all of your suspicions of what they do are correct and
likely we can't suspect enough :). Things become messier by the
day, lately the ISP-s (or some entity behind them) join in; for
example, since may be a month if I start facebook-browsing (I am
not a very active facebooker but some days I spend 5-10 minutes,
mainly looking at posts to local village groups and the poster's
profiles) after only 2-3 minutes of active browsing certain parts
become non-responsive (not the main page, just what it references,
photos, posts, menus etc.). And this is not facebook's fault, if
I log in via TOR things work just fine, so it is either the local
ISP or something between them and facebook, who knows. My guess is
the local ISP get too much facebook traffic (all those kids with
their phones) and limit it but it is as good as anybody's guess.
browsing facebook is a drop in the ocean compared to watching a movie

Anyway, I don't even try to guess who is doing what on the web.
I don't switch scripts off or use adblockers, I guess I don't
waste that much time browsing. Mostly the BBC website, football
scores etc., I don't know how much of these will work if I
block the ads, so far it is tolerable for me.
everything still works, just a million times better

install something like adblock+ and you'll never go back

Except that it prevents you from accessing some sites. I suppose most of those are good riddance.

--

Rick C.

+- Get 1,000 miles of free Supercharging
+- Tesla referral code - https://ts.la/richard11209
 
On Tuesday, February 1, 2022 at 2:00:00 AM UTC+1, gnuarm.del...@gmail.com wrote:
On Monday, January 31, 2022 at 6:06:35 PM UTC-5, lang...@fonz.dk wrote:
On Monday, January 31, 2022 at 11:51:27 PM UTC+1, Dimiter Popoff wrote:
On 2/1/2022 0:28, Tom Gardner wrote:
On 31/01/22 20:22, Dimiter_Popoff wrote:
Websites are slowed down in an annoying way almost 100% by ad services,
some google facebook you name it ad server gets overloaded and you
have to wait for it, websites are typically written with the ads
having priority over the info you are after, unsurprisingly.

Ad servers are slow partly because they take time to /auction/
your eyeballs to the highest bidder. Yes, they pass info to many
of their customers, so their customers can decide how valuable
(or not) you are to them.

NoScript and AdBlock are necessary when browsing the web.
Carl Sagan foresaw the necessity for them in his novel Contact,
albeit with TV advertising rather than the web.
Probably all of your suspicions of what they do are correct and
likely we can't suspect enough :). Things become messier by the
day, lately the ISP-s (or some entity behind them) join in; for
example, since may be a month if I start facebook-browsing (I am
not a very active facebooker but some days I spend 5-10 minutes,
mainly looking at posts to local village groups and the poster's
profiles) after only 2-3 minutes of active browsing certain parts
become non-responsive (not the main page, just what it references,
photos, posts, menus etc.). And this is not facebook's fault, if
I log in via TOR things work just fine, so it is either the local
ISP or something between them and facebook, who knows. My guess is
the local ISP get too much facebook traffic (all those kids with
their phones) and limit it but it is as good as anybody's guess.
browsing facebook is a drop in the ocean compared to watching a movie

Anyway, I don't even try to guess who is doing what on the web.
I don't switch scripts off or use adblockers, I guess I don't
waste that much time browsing. Mostly the BBC website, football
scores etc., I don't know how much of these will work if I
block the ads, so far it is tolerable for me.
everything still works, just a million times better

install something like adblock+ and you'll never go back
Except that it prevents you from accessing some sites. I suppose most of those are good riddance.

very few sites and most of the blocks quickly get circumvented by the adblockers
 
On Monday, January 31, 2022 at 8:15:39 PM UTC-5, lang...@fonz.dk wrote:
On Tuesday, February 1, 2022 at 2:00:00 AM UTC+1, gnuarm.del...@gmail.com wrote:
On Monday, January 31, 2022 at 6:06:35 PM UTC-5, lang...@fonz.dk wrote:
On Monday, January 31, 2022 at 11:51:27 PM UTC+1, Dimiter Popoff wrote:
On 2/1/2022 0:28, Tom Gardner wrote:
On 31/01/22 20:22, Dimiter_Popoff wrote:
Websites are slowed down in an annoying way almost 100% by ad services,
some google facebook you name it ad server gets overloaded and you
have to wait for it, websites are typically written with the ads
having priority over the info you are after, unsurprisingly.

Ad servers are slow partly because they take time to /auction/
your eyeballs to the highest bidder. Yes, they pass info to many
of their customers, so their customers can decide how valuable
(or not) you are to them.

NoScript and AdBlock are necessary when browsing the web.
Carl Sagan foresaw the necessity for them in his novel Contact,
albeit with TV advertising rather than the web.
Probably all of your suspicions of what they do are correct and
likely we can't suspect enough :). Things become messier by the
day, lately the ISP-s (or some entity behind them) join in; for
example, since may be a month if I start facebook-browsing (I am
not a very active facebooker but some days I spend 5-10 minutes,
mainly looking at posts to local village groups and the poster's
profiles) after only 2-3 minutes of active browsing certain parts
become non-responsive (not the main page, just what it references,
photos, posts, menus etc.). And this is not facebook's fault, if
I log in via TOR things work just fine, so it is either the local
ISP or something between them and facebook, who knows. My guess is
the local ISP get too much facebook traffic (all those kids with
their phones) and limit it but it is as good as anybody's guess.
browsing facebook is a drop in the ocean compared to watching a movie

Anyway, I don't even try to guess who is doing what on the web.
I don't switch scripts off or use adblockers, I guess I don't
waste that much time browsing. Mostly the BBC website, football
scores etc., I don't know how much of these will work if I
block the ads, so far it is tolerable for me.
everything still works, just a million times better

install something like adblock+ and you'll never go back
Except that it prevents you from accessing some sites. I suppose most of those are good riddance.
very few sites and most of the blocks quickly get circumvented by the adblockers

I don't visit very many sites and they repeatedly block my access. I use Google News and it is common for the referred web sites to block ad blockers by putting up an overlay telling you to turn off your ad blocker. Some you can tell to simply go away. Others are persistent and you can't view the page. Google News has a feature to let you block seeing the headlines of a site, so the ones which are persistent get removed from my view. Fox News recently bit the dust this way, and that's one I would like to read. They may be a bit extreme, but not always, and it is good to hear from all perspectives. I think I blocked Reuters as well because of their ad blocker block.

I use several browsers and have uBlock on this one (Firefox). I seem to be using AdBlock on Chrome. I also use a Comodo variant of the Chrome browser with their own tool that is more about security, but I believe it also blocks ads. Then I use an AVG browser which, of course, blocks ads and has security. I think I have the most trouble with AdBlock on Chrome, but that's probably because I view news on Chrome, where most of the ads show up.

--

Rick C.

++ Get 1,000 miles of free Supercharging
++ Tesla referral code - https://ts.la/richard11209
 
