Of course, those bleak days are mercifully behind us now -- but as carriers around the world start to light up a promising new generation of high-speed wireless networks, things are beginning to get a little confusing. Just what is "4G," anyway? It's one higher than 3G, sure, but does that necessarily mean it's better? Why are all four national carriers in the US suddenly calling their networks 4G? Is it all the same thing? Answering those questions requires that we take a little walk through wireless past, present, and future... but we think it's a walk you'll enjoy.
First things first: "G" stands for "generation," so when you hear someone refer to a "4G network," that means they're talking about a wireless network based on fourth-generation technology. And actually, it's the definition of a "generation" in this context that has us in this whole pickle in the first place; it's the reason why there's so much confusion. But more on that in a bit -- first, let's take a trip down memory lane into the primordial ooze that gave rise to the first generation way back in the day.
1G
Thing is, no one was thinking about data services in the 1G days; these were purely analog systems that were conceived and designed for voice calls and very little else. Modems existed that could communicate over these networks -- some handsets even had them built in -- but because analog cellular connections were susceptible to far more noise than conventional landlines, transfer speeds were ridiculously slow. And even if they'd been fast, it wouldn't have really mattered; per-minute rates on AMPS networks in the '80s made cellphones luxuries and business necessities for Wall Street power brokers, not must-haves for the everyman. Besides, the technology didn't exist for an awesome smartphone that could consume that much data anyhow. Oh, and YouTube had yet to be invented. The stars simply hadn't yet aligned.
2G
Still, these nascent 2G standards didn't have intrinsic, tightly-coupled support for data services woven into them. Many such networks supported text messaging, though, so that was a start -- and they also supported something called CSD, circuit-switched data. CSD allowed you to place a dial-up data call digitally, so that the network's switching station was receiving actual ones and zeroes from you rather than the screech of an analog modem. Put simply, it meant that you could transfer data faster -- up to 14.4kbps, in fact, which made it about as fast as an early- to mid-nineties landline modem.
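To put that 14.4kbps figure in perspective, here's a rough back-of-the-envelope sketch -- the 2MB payload is our own illustrative choice, not anything from the CSD spec:

```python
def transfer_time_seconds(size_bytes: int, link_kbps: float) -> float:
    """Ideal transfer time, ignoring protocol overhead and line noise."""
    bits = size_bytes * 8
    return bits / (link_kbps * 1000)

file_size = 2 * 1024 * 1024          # 2MB -- an arbitrary illustrative payload
csd_minutes = transfer_time_seconds(file_size, 14.4) / 60
print(f"2MB over 14.4kbps CSD: about {csd_minutes:.0f} minutes")  # ~19 minutes
```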
At the end of the day, though, CSD was a hack -- a way to repurpose these voice-centric networks for data. You still had to place a "call" to connect, so the service wasn't always available. The experience was very similar to using a dial-up modem at home: either you were online, or you weren't. Services like push email and instant messaging to your phone were basically science fiction. Furthermore, because a CSD connection was a call, you were burning minutes to get connected -- and these technologies were in play at a time when monthly minute buckets on cellular plans were measured in the dozens, not the hundreds or thousands. Unless you had a company writing a check for your wireless bill every month, using CSD for anything more than an occasional novelty wasn't practical.
2.5G: you know you're in trouble when you need a decimal place
The 4G identity crisis we're dealing with today really started well over a decade ago, around the time that standards bodies were hard at work finalizing 3G technologies.
And that's the story of how GPRS got stuck as a tweener: better than 2G, not good enough to be 3G. It was important enough that it might have earned the right to be called 3G had the ITU not already drawn the line, but that's how the cookie crumbles. Turns out it would just be the first of many, many generational schisms over the next decade.
3G, 3.5G, 3.75G... oh, and 2.75G, too
In addition to the aforementioned speed requirements, the ITU's official 3G specification also called out that compatible technologies should offer smooth migration paths from 2G networks. To that end, a standard called UMTS rose to the top as the 3G choice for GSM operators, and CDMA2000 came about as the backward-compatible successor to IS-95.
Following the precedent set by GPRS, CDMA2000 offered CDMA networks an "always-on" data connection in the form of a technology called 1xRTT. Here's where it gets a little confusing: even though CDMA2000 on the whole is officially a 3G standard, 1xRTT is only slightly faster than GPRS in real-world use -- 100kbps or so -- and therefore is usually lumped in with GPRS as a 2.5G standard. Fortunately, CDMA2000 also defined the more advanced 1xEV-DO protocol, and that's where the real 3G money was at, topping out at around 2.5Mbps.
So where would EDGE fit, then? Depends who you ask. It's not as fast as UMTS or EV-DO, so you might say it's not 3G. But it's clearly faster than GPRS, which means it should be better than 2.5G, right? Indeed, many folks would call EDGE a 2.75G technology, eliciting sighs from fraction-haters everywhere. The ITU doesn't help matters, officially referring to EDGE as an IMT-2000 Narrowband technology -- basically, a 2G standard capable of eking out 3G-esque speeds.
As the decade rolled on, CDMA2000 networks would get a nifty software upgrade to EV-DO Revision A, offering slightly faster downlink speeds and significantly faster uplink speeds -- the original specification (called EV-DO Revision 0) only allowed for uploads of about 150kbps, impractical for the rampant picture and video sharing we're all doing with our phones and laptops these days. Revision A can do about ten times that. Can't very well lump an upgrade that big in with 3G, can you? 3.5G it is, then! Ditto for UMTS: HSDPA would add significantly faster downlink speeds, and HSUPA would do the same for the uplink.
Further refinements to UMTS would produce HSPA+, dual-carrier HSPA+, and HSPA+ Evolution, ranging in theoretical speeds from 14Mbps all the way past a mind-boggling 600Mbps. So, what's the deal? Is it safe to say we've hit a new generation yet, or is this just 3.75G the same way that EDGE was 2.75G?
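For a loose sense of scale before we move on, here's a quick sketch lining up the peak figures we've quoted so far. These are approximate theoretical numbers (and our own loose generation labels), not measured throughput:

```python
# Rough ladder of the peak speeds quoted above, in kbps -- approximations only.
PEAK_KBPS = {
    "CSD (2G)":            14.4,      # "up to 14.4kbps"
    "1xRTT / GPRS (2.5G)":  100,      # "100kbps or so"
    "EV-DO (3G)":          2_500,     # "around 2.5Mbps"
    "HSPA+ (3.5G-ish)":    14_000,    # low end of the HSPA+ family
    "HSPA+ Evolution":     600_000,   # "past a mind-boggling 600Mbps"
}

for tech, kbps in PEAK_KBPS.items():
    print(f"{tech:<22} ~{kbps / PEAK_KBPS['CSD (2G)']:>9,.0f}x faster than CSD")
```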
Lies, damn lies, and 4G
Just as it did with the 3G standard -- IMT-2000 -- the ITU has taken ownership of 4G, bundling it into a specification known as IMT-Advanced. It's no slouch, either: the document calls for 4G technologies to deliver downlink speeds of 1Gbps when stationary and 100Mbps when mobile, roughly 500-fold and 250-fold improvements over IMT-2000, respectively. Those are truly wild speeds that would easily outstrip the average DSL or cable broadband connection, which is why the FCC has been so insistent that wireless technology plays a key role in getting broadband data to rural areas -- it's more cost-effective to plant a single 4G tower that can cover several dozen miles than it is to blanket farmland with fiber optics.
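The arithmetic behind those multipliers is easy to check. A minimal sketch, assuming the commonly cited IMT-2000 targets of 2Mbps stationary and 384kbps mobile as the baseline -- figures not stated explicitly above:

```python
# Checking the "500-fold and 250-fold" claim. The IMT-Advanced targets
# (1Gbps stationary, 100Mbps mobile) are quoted above; the IMT-2000
# baselines of 2Mbps and 384kbps are our assumption about the 3G figures
# being compared against.
imt_advanced_kbps = {"stationary": 1_000_000, "mobile": 100_000}
imt_2000_kbps     = {"stationary": 2_000,     "mobile": 384}   # assumed baseline

for scenario, target in imt_advanced_kbps.items():
    ratio = target / imt_2000_kbps[scenario]
    print(f"{scenario}: ~{ratio:.0f}x improvement")
# stationary: ~500x, mobile: ~260x -- hence "roughly 500-fold and 250-fold"
```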
Where WiMAX and LTE fall short, though, is in raw speed. The former tops out at around 40Mbps and the latter around 100Mbps theoretical, while practical, real-world speeds on commercial networks so far have tended to range between around 4Mbps and 30Mbps -- well short of IMT-Advanced's lofty (and, arguably, most important) goal. Updates to these standards -- WiMAX 2 and LTE-Advanced, respectively -- promise to do the job, but neither has been finalized yet... and production networks that make use of them are still years away.
That said, you could still easily argue that the original WiMAX and LTE standards are authentically different enough from the classically-defined 3G standards to call them a true generational upgrade -- and indeed, most (if not all) of the carriers around the world that have deployed them have referred to them as "4G." It's an obvious marketing advantage for them, and the ITU -- for all the good it's trying to do -- has no jurisdiction to stop it. Both technologies (LTE in particular) will be deployed to many, many more carriers around the globe over the next several years, and the use of the "4G" moniker is only going to grow. It can't be stopped.
Arguably, it was T-Mobile's move that really sparked a fundamental rethinking of what "4G" means to the phone-buying public. AT&T, which is in the process of upgrading to HSPA+ and will start offering LTE in some markets later this year, is calling both of these networks 4G -- and naturally, neither Sprint nor Verizon has even thought about backing down on their end. All four US national carriers seem entrenched at this point, having successfully stolen the 4G label from the ITU -- they've taken it, run with it, and reshaped it.
Wrap-up
So where does this all leave us? In short, carriers seem to have won this battle: the ITU recently backed down, saying that the term 4G "may also be applied to the forerunners of these technologies, LTE and WiMAX, and to other evolved 3G technologies providing a substantial level of improvement in performance and capabilities with respect to the initial third generation systems now deployed." And in a way, we think that's fair -- no one would argue that the so-called "4G" network of today resembles the 3G network of 2001. We can stream extremely high-quality video, upload huge files in the blink of an eye, and -- given the right circumstances -- even use some of these networks as DSL replacements. Sounds like a generational leap to us.
Whether WiMAX 2 and LTE-Advanced will ultimately be called "4G" by the time they're available is unclear, but our guess is that they won't -- the experience you'll have on those networks will be vastly different than the 4G of today. And let's be honest: the world's marketing departments have no shortage of Gs at their disposal.