Time Machine backups in iCloud?

I’ve never heard any rumblings that Apple is considering Time Machine in iCloud, but it would make sense to me as a way for Apple to encourage customers to pay for more cloud storage. Obviously, iOS backups can already go to iCloud, and Apple had an offsite Backup app in the MobileMe days (though it wasn’t any good). So it’s not unthinkable. (See the comments on this article for more on MobileMe Backup.)

Unfortunately, iCloud backups don’t scale.

My old Mac mini has about 1TB of files. I use Time Machine to an external 3TB drive, along with two 2TB external drives that I use as system clones.

I don’t want to think about how long it would take to upload all that to iCloud, and I really don’t want to think about how many days (weeks?) it would take to perform a full-system restore from it afterward.
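For a rough sense of scale, here is a back-of-envelope calculation of how long a 1 TB upload would take at a few assumed sustained upload rates (real throughput to any cloud service varies and is usually well below the line rate):

```python
# Days needed to upload 1 TB at an assumed sustained rate in megabits/s.
SIZE_BYTES = 1 * 10**12  # 1 TB

def upload_days(mbps: float) -> float:
    """Days to move SIZE_BYTES at a sustained rate of `mbps` Mbit/s."""
    bytes_per_sec = mbps * 1_000_000 / 8
    return SIZE_BYTES / bytes_per_sec / 86_400  # 86,400 seconds per day

for mbps in (10, 50, 1000):
    print(f"{mbps:>5} Mbps upload -> {upload_days(mbps):6.2f} days")
```

At a 10 Mbps upload rate, the initial 1 TB backup alone takes over nine days of continuous transfer, which is why the math gets ugly fast.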

Maybe it will work for some people, but if Apple gets rid of their local storage solution, it will not make me switch to their cloud offering. It will force me to go buy a replacement solution from a third-party vendor.


Maybe that all changes when faster broadband is ubiquitous?

Will 5G replace coax for the last mile? What would 5G do for Time Machine?

I have Backblaze to make sure that if I ever lose both my Mac and my external HD, I still have an off-site backup. Backblaze only backs up user data, not programs or the OS. I’ve never had to use it, but if you only need one file, you can retrieve it online. If you need your whole drive, they will actually send you your backup overnight on an external drive, which you are expected to return after you restore your data.

I’ve heard of friends setting up a drive-swapping club: they periodically give a friend their Time Machine drive for safekeeping and take the friend’s in return. Hopefully the friend is not your next-door neighbor, because a flood or fire could wipe you both out. But someone from work, maybe, or a relative in another city.

The costs of providing 5G have to be less than trenching, telephone poles, or other undergrounding methods.

Or does underground fiber do the job of replacing the Comcast et al. coax?

5G doesn’t have the range of 4G, so more cell sites will be needed (a good reason to invest in infrastructure backbone providers). These are the people that connect the cell sites to the server farms or provide high-speed data for an entire business or campus. I’m not sure if they use embedded copper, fiber, microwave, or a combination.

But the companies providing your Internet have no interest in constructing last-mile infrastructure, so they outsource it to specialists. Wired solutions work when they are running T-1 lines from campus to campus to handle huge amounts of traffic. It isn’t last-mile consumer stuff.

Someday, we will have T-1 and T-3 speeds for consumers at affordable rates. But that is some years away.


Same here. No way I will be backing up TB-size drives over the internet. It’s just not practical. Not as long as the peak I see on my Gigabit fiber to Apple is ~20 MB/s while the peak I see over USB-C is easily 300 MB/s. Plus, why should I do any of that and be forced to worry about Apple safeguarding my data when I can instead rely on myself at lower cost? Seems like a no-brainer to me.

Fixed-wireless has always been easy to install, but the bandwidth and reliability have always been less than what you can get from cable, fiber and metro-Ethernet service.

5G isn’t likely to change this. The huge bandwidth increases require use of millimeter-wave spectrum, which is very short range and will therefore probably not be deployed outside of dense urban environments - where you can get even faster wired speeds, if you’re so inclined.

We’ve passed those speeds a long time ago. A T1 line is approximately 1.5 Mbps and a T3 line is approximately 45 Mbps.

Unless you’re in a rural area where wireless is your only high speed option, faster speeds are usually available. Comcast offers residential service up to 200 Mbps (I currently pay about $80/mo for 100M service). Areas served by fiber are offering gigabit speeds at quite affordable prices.

I’ve even seen a few residential fiber companies that can provide 10 Gbit/s service. It costs more than I would want to spend, but not too horrible compared to other popular subscription services (like cable/satellite TV).

Even DSL lines, which are pretty slow these days, can usually deliver at least T1 speeds to most customers and many customers can get much better speeds.

But when it comes to backing up your computer, the speed of that final link from your ISP’s central office to your home is only one part of the equation. You still have to deal with the bandwidth along the entire path from your service provider to the remote server and the capacity of that server. When you have gigabit-speed Internet access, you often find yourself bottlenecked by those parts of the network - which you can’t do a thing about.

But even with a good solid connection all the way to the remote server that remains at top speed all the time, it’s still going to be slower than a USB 3 connection (5 or 10 Gbps) or Thunderbolt (10, 20 or 40 Gbps, depending on generation). Unless you have one of those 10 Gbps Internet links that I mentioned - that might theoretically be faster than USB, but they’re not commonplace at this time.
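To put those interface speeds side by side, here is a small sketch comparing nominal transfer times for a 1 TB backup (nominal link rates only; real sustained throughput, especially over the Internet, will be lower):

```python
# Hours to move 1 TB at each link's nominal rate (ignores protocol
# overhead and real-world bottlenecks, which only widen the gap).
TB_BYTES = 10**12

def hours_at(gbps: float) -> float:
    """Hours to move 1 TB at a nominal rate of `gbps` gigabits/s."""
    return TB_BYTES * 8 / (gbps * 10**9) / 3600

links = {
    "1 Gbps Internet":         1,
    "USB 3 (5 Gbps)":          5,
    "USB 3 (10 Gbps)":         10,
    "Thunderbolt 3 (40 Gbps)": 40,
}
for name, gbps in links.items():
    print(f"{name:25s} ~{hours_at(gbps):5.2f} hours")
```

Even at nominal rates, the local bus wins by an order of magnitude, and in practice the Internet path rarely sustains anywhere near line rate.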

While the idea of keeping everything permanently backed up to a remote server somewhere is attractive on paper, I think it will never replace a good local backup strategy. At best, it will be a supplement for when everything else fails.


Thanks for a good reply. I have been out of the tech workforce for a long time. T-# lines used to link central offices or were Internet backbones for ISPs. Things have clearly changed for the better.

So what do they call the lines that replaced T-3 that are the current Internet backbone that carry huge amounts of data?

I have been using Backblaze for backup for over 2 years, and once I moved to Gigabit fiber internet, I don’t even notice it backing up my TB drives.

I sincerely doubt Time Machine will go to only internet-based backup, but I suppose it is a slight possibility if Apple significantly upgrades their throughput.

The thing is, too much of the world doesn’t have access to Gigabit internet.

There are many, many different standards that have evolved over the years. Please indulge me as I go into a bit of history first…

The T-carrier system uses time-division multiplexing (TDM) to combine multiple analog (voice) phone lines. A T1 multiplexes 24 voice lines. A T3 multiplexes 28 T1 lines (or 672 voice lines). There are also definitions for T2, T4 and T5, but those were almost never used.

When the phone network went digital, digital versions were defined. A DS0 is a digital version of a single voice line (64 Kbps). A DS1 multiplexes 24 DS0s and can also be used in “concatenated” form as a single fat pipe (about 1.5 Mbps). Similarly, a DS3 multiplexes 28 DS1 lines or 672 DS0 lines or can be used concatenated as a single pipe (about 45 Mbps). Europe has similar technology - an E1 multiplexes 32 channels (about 2 Mbps) and an E3 multiplexes 16 E1 lines (about 34 Mbps).
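The hierarchy arithmetic above can be checked directly; a quick sketch using the standard 64 Kbps DS0 channel, with framing overhead ignored (a real DS1 runs at 1.544 Mbps including framing):

```python
# Digital hierarchy channel math (framing overhead ignored).
DS0_KBPS = 64                 # one digitized voice line

ds1_kbps = 24 * DS0_KBPS      # 1536 Kbps -> the "about 1.5 Mbps" T1/DS1
ds3_kbps = 28 * ds1_kbps      # 43008 Kbps -> the "about 45 Mbps" T3/DS3
voice_per_ds3 = 28 * 24       # 672 voice lines per DS3

print(ds1_kbps, ds3_kbps, voice_per_ds3)
```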

When phone companies started using fiber optics, they defined the SONET (Synchronous Optical NETworking) standards. Outside of North America, they use SDH (Synchronous Digital Hierarchy) which is the same, but with different names.

At the lowest level of SONET is the OC-1 (aka STM-0 in SDH). This is rarely used by itself, but it has a capacity of about 52 Mbps and is often used to carry a DS3 line over fiber optics. There is a hierarchy of multiplexing built over OC-1/STM-0 for defining higher-bandwidth connections. OC-3/STM-1 multiplexes 3 OC-1/STM-0 lines or can be used concatenated (about 155 Mbps). OC-12/STM-4 multiplexes 4 OC-3/STM-1 lines (about 622 Mbps). OC-48/STM-16 multiplexes 4 OC-12/STM-4 lines (about 2.4 Gbps). OC-192/STM-64 multiplexes 4 OC-48/STM-16 lines (about 10 Gbps). OC-768/STM-256 multiplexes 4 OC-192/STM-64 lines (about 40 Gbps).
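Those OC/STM rates all follow from one base rate; a sketch (OC-1 is 51.84 Mbps, and OC-n is simply n times that, so the figures in the text are rounded):

```python
# SONET rates: OC-n = n * OC-1, with OC-1 = 51.84 Mbps.
OC1_MBPS = 51.84

for n in (1, 3, 12, 48, 192, 768):
    mbps = n * OC1_MBPS
    print(f"OC-{n:<4} ~ {mbps / 1000:8.3f} Gbps")
```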

Optical carrier lines (especially up to OC-48, less so at the higher speeds) were used to carry circuit-switched traffic (voice lines, trunk lines and leased lines) via multiplexing lower-speed lines and packet-switched traffic (Internet data) using the concatenated modes. There’s still a lot of SONET/SDH traffic and you can lease DS- and OC- lines from a phone company if you have a need for it.

These days, however, the high speed links (especially outside of legacy phone company networks) are based on Ethernet, especially carried over fiber optics. All of the Ethernet speeds you’re familiar with (10M, 100M, Gigabit, 10G) have fiber-optic equivalents. There are also much higher speeds that only exist over fiber (25G, 40G, 50G, 100G, 200G and 400G) and higher speeds are in R&D.

Finally, in case you thought 400 Gbit Ethernet was as fast as it gets, all of these optical technologies (SONET and Ethernet alike) operate using a single wavelength (color) of light on the fiber. It is possible to run many different links over a single fiber by using a different wavelength for each one. This is known as WDM or Wavelength-Division Multiplexing. Depending on the transceivers used and the kind of fiber used, it is possible to carry up to 128 wavelengths over each fiber, each of which may be a complete SONET or Ethernet link.

Of course, there is a cost for these high speeds. Although you could theoretically carry 128 400GBASE-ZR (400G Ethernet using a single wavelength per line) lines over a single fiber (51 Tbps), only the largest corporations could afford the equipment, and there is rarely a need to carry that much bandwidth over a single fiber (with the big exception being fibers that carry traffic across the Atlantic and Pacific oceans).
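That headline figure is just the product of the assumptions stated above:

```python
# Aggregate capacity of one fiber under the stated assumptions:
# 128 wavelengths, each carrying a 400 Gbps Ethernet link.
wavelengths = 128
per_wavelength_gbps = 400
total_tbps = wavelengths * per_wavelength_gbps / 1000
print(total_tbps)  # 51.2 Tbps
```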

In actual practice, you’ll find a pretty diverse mix of the above technologies used for the links between phone companies, Internet backbone providers and ISPs. Companies will run (or more commonly, lease from bigger companies) link speeds proportional to the amount of traffic they expect to carry over those links. A link that is too slow will create a bottleneck and will upset customers (and possibly expose them to lawsuits if guarantees were made), while a link that is too fast will cost too much.

Individual subscribers (typically residential and small business) typically get a line from their site to an ISP’s central office. This might be coaxial cable (cable modem), twisted-pair copper (DSL) or fiber optic. Fiber might be shared by multiple customers (e.g. PON (Passive Optical Networking) used by Verizon FiOS and AT&T U-Verse) or it could be dedicated point-to-point fibers to the central office.

Business customers that need (and can pay for) higher speeds can pretty much lease any kind of SONET or Ethernet link they need, but speeds higher than common commercial offerings may be very expensive.

In metropolitan areas, you often find SONET and Ethernet set up in various kinds of ring and bus technologies that cover the city, allowing subscribers to hook up to them using nearby junctions (in boxes or manholes placed strategically throughout the city).

The maximum speeds vary depending on the technology used, but in all cases, those speeds are just the link to the central office. Once your data gets there, then it travels over the lines your ISP leases to connect its central offices to each other and to other service providers. Peering is also used, in order to provide easy interconnection between service providers.

Hopefully, this helps. It’s quite a lot to digest, and I may not have gotten everything 100% correct, but I think it’s a pretty good summary of the current state of the Internet.


Wow! Thank you for the education. So much I learned that I didn’t know.

FYI, I just saw this bit of news today:

https://www.convergedigest.com/2020/09/google-fiber-to-offer-2-gig-service-for.html

A really sweet deal if you happen to live in Nashville or Huntsville.