Use Apple’s networkQuality Tool to Test Internet Responsiveness

Originally published at: Use Apple’s networkQuality Tool to Test Internet Responsiveness - TidBITS

Apple’s networkQuality command-line tool in Monterey provides a new metric—“responsiveness”—that measures latency in a more realistic manner, better reflecting your real-world experience with interactive Internet services like videoconferencing and gaming.


I recently stumbled across an article in MacRumors describing a new (in Monterey) Apple utility for measuring network speeds. Here are results from a few quick tests that may be of interest.

My test environment:

  • M1 MacBook Air w/12.3.1
  • wired ethernet connection to GigE switch
  • GigE router connected to optical fibre, with 500/500 service from Bell Canada.

Ookla Speedtest measures around 620/510 Mbps.

Apple’s networkQuality utility (located at /usr/bin) gave much slower results.
An example:

M1-MacBookAir:~ david$ networkquality
==== SUMMARY ====
Upload capacity: 275.713 Mbps
Download capacity: 100.458 Mbps
Upload flows: 20
Download flows: 20
Responsiveness: High (3416 RPM)

I ran it a few times and got varying results: 91/201, 100/275, 102/350, 98/394.
The download speed is pretty consistent, and always measures less than the upload.
Sometimes much less.
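For context, the Responsiveness figure is measured in round trips per minute (RPM) under working load, so it converts directly into an average time per round trip. A quick conversion in the shell, using the 3416 RPM figure from the run above:

```shell
# Convert Apple's Responsiveness figure (RPM = round trips per
# minute under load) into milliseconds per round trip.
rpm=3416
awk -v r="$rpm" 'BEGIN { printf "%.1f ms per round trip\n", 60000 / r }'
```

So 3416 RPM works out to roughly 17.6 ms per round trip while the link is saturated.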

Little Snitch reveals the host name Apple is using to do the speed tests, and shows that the servers used are part of clusters in NYC and Newark.
No doubt the hostname will resolve to different IPs depending on the location of the host doing the lookup.

The Ookla Speedtest program uses test servers that are ‘closer’ to my location (in Canada) than the NYC/NJ servers used by the networkQuality utility. I believe that Ookla uses ping tests to determine appropriate (nearby) server choices.

@ace will be pleased to know that if I select a test server at Cornell, Ookla measures 592/505 with ping time of 18msec. The Cornell campus has an excellent connection to the internet backbone, and thence to (at least) eastern Canada.

It’s not clear if the slower results obtained by the macOS utility are due to limitations in the networkQuality program, in the various Apple test servers, or bottlenecks in the internet connection from here to there. Probably some of each.

Most of the time, I don’t need a whole lot of details. I need a rough idea, and usually more about downstream than up. That’s what the tool is good at, in spite of its various limitations.

Just a note that I moved the two previous posts into the article’s comments because they appeared literally minutes before I posted the article. :slight_smile:


I know it’s not specifically speed related but gee I miss the Network Utility app. I just don’t understand why Apple would remove such a useful utility which I used every week around the office.


Apple’s mantra for many many years:

“Knowing what to leave out.”

Apple user experience for many many years:

“Where did ______ go? I use that all the time.”



I kept a copy of Network Utility from Mojave and it continues to run just fine in Big Sur. Be advised, however, to keep a copy outside of the Applications folder, since Apple re-deprecates it each time it does a system or security update.
It was pointed out by another TidBITS Talk-er that you can run ping, at least, in Terminal, too.

I first became aware of this handy little executable back in early November on Dan Petrov’s blog (Blog | DanPetrov). A little later, I saw an app on the App Store titled “Network Quality Test.” As far as I can tell, the author of that app charges 99¢ to open Terminal and type networkQuality.

And BTW, if you want your actual ping time, just keep Terminal open and type ping followed by a space and the IP address or hostname of the server you want to test. Use Control-C to stop and get a summary.

I did not know that Network Utility was gone because I never use it. I see that if you try to start it, you get this message: “For networking tools netstat, ping, dig, traceroute, whois, finger, open terminal and type the underlying command at the command line.” I think Apple did the right thing here. The Network Utility was a redundant frontend to the UNIX tools. Instructions for how to use them is one web search away.
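For anyone making the switch, here’s a rough mapping from Network Utility’s panes to their Terminal equivalents. The HOST and DOMAIN names are placeholders to substitute; this snippet just prints the cheat sheet:

```shell
# Print a cheat sheet mapping old Network Utility panes to the
# underlying Terminal commands (HOST/DOMAIN are placeholders).
for pair in "Netstat:netstat -rn" \
            "Ping:ping -c 4 HOST" \
            "Lookup:dig HOST" \
            "Traceroute:traceroute HOST" \
            "Whois:whois DOMAIN"; do
  printf '%-12s %s\n' "${pair%%:*}" "${pair#*:}"
done
```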

For my videoconferencing I had concluded that using Ethernet worked quite nicely, Wi-Fi not so much. The networkQuality tool confirms it.


Wi-Fi:

Upload capacity: 63.310 Mbps
Download capacity: 82.433 Mbps
Upload flows: 20
Download flows: 12
Responsiveness: Medium (249 RPM)


Ethernet:

Upload capacity: 167.930 Mbps
Download capacity: 338.662 Mbps
Upload flows: 20
Download flows: 12
Responsiveness: High (1474 RPM)

The server is in Copenhagen. Our Scandinavian sister country across the sea. Servers in Norway, Sweden or Finland would probably give better RPM.

In Terminal:

$ ping
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=55 time=12.406 ms
64 bytes from icmp_seq=1 ttl=55 time=12.113 ms

Pinging a Norwegian newspaper website:

$ ping
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=248 time=4.821 ms
64 bytes from icmp_seq=1 ttl=248 time=3.530 ms
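If you only care about the average, awk can pull it out of ping’s closing summary line. The summary format below matches macOS’s ping; the numbers are illustrative:

```shell
# macOS ping ends with a summary line like this one; with "/" as
# the field separator, the average round-trip time is field 5.
line="round-trip min/avg/max/stddev = 3.530/4.176/4.821/0.646 ms"
echo "$line" | awk -F'/' '{ print $5 }'
```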

I’m intrigued that you saw such a difference between Ethernet and Wi-Fi. I tested that here too, but got exactly the same results with both, which suggested that my problems were at the router and not related to the LAN. I wonder if that implies that your Wi-Fi gateway has some issue that’s bypassed when you connect via Ethernet?

Yes, this is a limitation of my old Wi-Fi system I am well aware of. I have ethernet hubs in all rooms where we use computers. Via ethernet, my provider’s speed is the limiting factor. Wi-Fi is for surfing and light work. The reason I stick with my old Airport Extremes is that the Wi-Fi is very stable.

So how old is your Extreme? Mine is the final generation and I regularly see it push 600 Mbps across wifi. Granted, that’s not enough to exploit the 1 Gbps fiber we have, but close enough for my home purposes. AT&T recently came into our neighborhood offering 1/3 the bandwidth of our existing fiber plan at a colossal $3 savings. LOL. At least at 300 Mbps my APExtreme would no longer be the limiting factor. :wink: I think I’ll stick with our small local provider. I’ll consider the $3 a service charge for not having to deal with AT&T. :laughing:


It is from 2011. Speed test to a Norwegian server via Extreme gives 500Mbps down and 350 up which is what I pay for.

The issue is that Network Utility was available on every Mac. When I was managing 40 or 50 machines at work, it was the first place to start when someone had network, printing, or web access issues. It’s hard to Google for instructions when the network is broken. It’s just another example of Apple deprecating a perfectly useful utility.

Why not just leave it there? It’s not like it would be costing them vast resources.


From the article: The likely reason that upload speed dropped when switching to Smart Queue Management is that it’s going to reserve some bandwidth for TCP ACKs and other replies - which improves the overall responsiveness of the connection, but at the expense of max single upload speed.

(I can’t guarantee that’s what it’s doing of course - but that’s a common networking trick if you’re manually setting up firewalls/etc.)
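For the curious, this is roughly what Smart Queue Management looks like when set up by hand on a Linux-based router (not macOS). The interface name and rate here are illustrative; shaping slightly below the link rate is what keeps the modem’s buffer from filling:

```shell
# Hypothetical example: enable CAKE (an SQM queue discipline) on a
# Linux router's WAN interface, shaped just below a 500 Mbps uplink
# so the modem's own buffer never fills. Requires root.
tc qdisc replace dev eth0 root cake bandwidth 450mbit
tc -s qdisc show dev eth0    # verify, and show queue statistics
```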


Thanks for the intro. Here’s mine just now:

==== SUMMARY ====
Upload capacity: 495.098 Mbps
Download capacity: 280.100 Mbps
Upload flows: 20
Download flows: 12
Responsiveness: High (4705 RPM)

I did the same sequence and got much lower results from the networkQuality tool.

For instance, OOKLA said download 138 and upload 184 for our town operated ISP.
However, Network quality says download 126 and upload 34.

I’m having trouble understanding this huge difference in the upload numbers.

Yes, @dstaal, I think you’re basically right.

There used to be a standalone device called the Broadband Blaster, IIRC. It went in between your modem (DSL or Cable) and your router. Ethernet in, ethernet out, and power. That was all it had.

How could putting a device right there make a difference in your connection speed? It didn’t speed up your connection. It worked by making sure that your modem upload buffer was never full. If your upload buffer is full, anything you try to send out gets put in line behind a megabyte or more (however big your modem buffer is) of traffic.

If you’re browsing the web, each time you try to load a page there will be dozens of requests for page content that you send out into that buffer. So you have to wait for those requests to actually get out before you can ever start receiving the HTML, image, javascript and CSS files you need to render the page.

Cable modems are (or were) particularly bad at having large buffers that slow down your round-trip requests. This is why turning on Quality of Service always slows down your max upload speed: you don’t want any buffers getting full, and the only way to ensure that is to upload slightly slower than your modem can handle, so its buffer is always empty or nearly empty. (The buffer only fills up when the modem has more to upload than the connection can handle.)

If you keep the buffer empty, then the instant you try to send out a request, it gets out. The instant you try to send out another video frame, it goes out. And then, of course, you get responses quicker, and the number of round trips you can make in a minute goes way up.

All of this just by slowing your uploads a little bit.
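The arithmetic behind that is simple: a full buffer adds delay equal to its size divided by the upload rate. A back-of-envelope sketch, assuming a 1 MB modem buffer and a 10 Mbps uplink:

```shell
# Queuing delay added by a full upload buffer:
#   delay = buffer size in bits / upload rate in bits per second
buffer_bytes=1048576   # 1 MB modem buffer (assumed)
upload_mbps=10         # 10 Mbps upstream link (assumed)
awk -v b="$buffer_bytes" -v m="$upload_mbps" \
    'BEGIN { printf "%.0f ms of added latency\n", b * 8 / (m * 1e6) * 1000 }'
```

Over 800 ms of extra latency from one full buffer explains why a slightly slower upload can feel dramatically more responsive.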

@Joseph There’s one additional thing it can do, which I mentioned, but it takes a bit of lower-level understanding of the protocols: when you’re downloading something from anywhere using TCP, after every packet of data your computer sends a small message back saying “I’ve got it.” If the other end doesn’t see that message within a certain amount of time, it assumes the packet didn’t get through and sends it again.

But the routers in the middle usually treat that reply like any other message, and if your upload link is full, they are as likely to drop or delay it as any other packet you’re sending. And if they do, you get the same download data multiple times, until one of your replies gets through.

However, if you reserve a bit of upload bandwidth and dedicate it to just those packets, then your downloads get through smoother and faster, with less stuttering. Add that to the issue you mention of not filling overly large buffers, and you’ve effectively made both directions more responsive, by just dedicating a bit of upload bandwidth.
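The amount of upload bandwidth those ACKs actually need is small. A rough estimate, assuming 1500-byte data packets and one 64-byte ACK for every two packets (typical delayed-ACK behavior):

```shell
# Estimate the ACK traffic generated by a 500 Mbps download,
# assuming 1500-byte data packets and delayed ACKs (one 64-byte
# ACK for every two data packets).
down_mbps=500
awk -v d="$down_mbps" 'BEGIN {
  pkts = d * 1e6 / (1500 * 8)   # data packets per second
  acks = pkts / 2               # one ACK per two packets
  printf "%.1f Mbps of ACKs\n", acks * 64 * 8 / 1e6
}'
```

Reserving on the order of 2% of the download rate for upstream ACKs is enough to keep a fast download flowing smoothly.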


Interesting. I didn’t realize that.

The difficulty of doing QoS well is that you need to know how much upload bandwidth is available right now. With network congestion and typical ISP performance, that number can go up and down. That means the easiest way to improve QoS is to hold back a significant percentage of the available upload bandwidth, just in case your bandwidth is lower an hour from now.