
Thursday, April 24, 2014

When it comes to net neutrality, either the FCC thinks we’re idiots, or it just doesn’t care

 

 

The proposed network neutrality rules the FCC is settling on don’t appear neutral at all. Here’s the conversation we should be having if the FCC really thinks our network policies need a rewrite.


With its latest plan to twist the concept of network neutrality into something that appears to be the opposite of neutral, the Federal Communications Commission has revealed that it believes the public can’t understand the issues — or that it is so in thrall to the companies it regulates that it doesn’t care what ordinary people think.

The FCC’s plans for implementing network neutrality came to light Wednesday in a Wall Street Journal article. The plans took the hallmark of network neutrality — the notion that ISPs shouldn’t discriminate between the traffic flowing over their networks — and turned it on its head. Under the proposed framework for so-called net neutrality, the FCC does away with the concept of nondiscrimination and instead offers up a new standard designed to prohibit “commercially unreasonable” practices.

Is this the pay-to-play internet model?

Tom Wheeler, pictured standing to the right of the president.


Most net neutrality advocates have understood the FCC’s decision to mean that the agency will allow ISPs to charge content companies for better traffic flow provided it isn’t “commercially unreasonable.”

It’s important to note that FCC Chairman Tom Wheeler came out a few hours after the Journal article (and others) appeared to insist that the media had his policy plans “flat out wrong.” The statement, offered below, neglects to address the crucial aspect of his proposed change: the idea that there’s room for any commercial practices in delivering a customer’s network packets.

Here’s Wheeler’s statement:

“There are reports that the FCC is gutting the Open Internet rule. They are flat out wrong. Tomorrow we will circulate to the Commission a new Open Internet proposal that will restore the concepts of net neutrality consistent with the court’s ruling in January. There is no ‘turnaround in policy.’ The same rules will apply to all Internet content. As with the original Open Internet rules, and consistent with the court’s decision, behavior that harms consumers or competition will not be permitted.”

Whether or not you think this is a good idea, inserting any sort of commercial relationship into delivering last-mile web content — outside of what the end consumer pays the ISP — is not network neutrality. So let’s stop calling it that.

Turning a technical argument into a commercial one

The FCC should man up and say exactly what it is doing here: It is implementing a double-sided market for the internet that could allow businesses to enter into commercial relationships with ISPs — who do not operate in a competitive market in the U.S. — for faster delivery of their content. And because capacity on broadband networks is limited, the flip side is that companies that don’t pay will see their content delivered more slowly.

Many will see this as a battle between the Netflixes of the world and the smaller video providers that might not be able to pay. But this is actually about differentiating between different classes of content. For example, if you are a streaming video provider, those faster speeds will probably affect the user experience. You’ll need to pay up, because your competitors certainly will, and eventually best-effort access isn’t going to cut it — especially as traffic on networks increases.

Photo by Thinkstock/wx-bradwang


However, if you are a backup company like Dropbox or Carbonite that can train users to send their files overnight, then you may not care about slower speeds. Because this is true: not all web content is created equal. As we put more content online, many people knowledgeable about network infrastructure point out the ridiculousness of trying to build out an ever-expanding network that treats Netflix traffic the same as a software download.

It’s like trying to build a highway that can handle Lamborghinis, Chevy Volts and bicycles all driving in the same lane. Instead, these network experts argue that we need to figure out how to divide the lanes of traffic while ensuring that all vehicles can travel on the road without discrimination. That’s actually a completely fair and legitimate debate to have, but I’m not sure that is the debate we’re going to be having if the FCC’s plans go through.

Where is the burden of proof in this standard?

That’s because instead of discussing the real challenges of managing the growing amount of traffic on the web that has different delivery requirements, the FCC is going to let the ISPs decide — not just how those lanes are divided, but also the rules that govern who can travel where and how much they should pay. It has said it will not allow blocking and that ISPs must be transparent, but this “commercially unreasonable” framework strikes me as putting the burden of proof on the consumer or injured party to complain to the FCC long after the horse has left the barn — or their packets have failed to reach the user.

I don’t think that’s the way this conversation should play out. The FCC and ISPs may argue that because the ISPs built the original roads (their underlying network infrastructure), it is their right to decide the rules of that road and how much people will pay to access it. But at some point since the FCC first declared that broadband was an information product — and thus not subject to the common carrier rules at the heart of today’s network neutrality fight — broadband has become a utility for consumers and businesses.

The idea that we would let ISPs set commercial terms that impose taxes on startups and existing companies alike — all without ensuring any sort of lower price for consumers or network upgrades from the ISPs — is ridiculous. Broadband networks are not a public utility in the legal sense, but they are the foundation for our economy.

And as such we owe it to all participants to have a real debate about how we’re going to deliver the exponential increase in network traffic over our private networks. That’s a debate that the FCC must referee, not after the damage has been done, but in advance. Instead of calling its efforts net neutrality when they clearly aren’t, it should be honest and point out that it thinks neutral networks won’t work given the technical demands we’re placing on the internet. Then we can have a conversation about whether that’s the case, and what we should do about it.

We can’t let ISPs operating in a duopoly just set the rules for us.

Absent competition, the proposed rules look like a way for ISPs to get more money, set rules that will affect the shape of what is developed on the internet, and do all of these things with no guarantees that consumers or the broadband economy get anything in return. I don’t find that reasonable at all.

IBM unveils Power8 and OpenPower pincer attack on Intel’s x86 server monopoly

 


IBM has taken the wraps off the first servers powered by its monstrously powerful Power8 CPUs. With more than 4 billion transistors packed into a stupidly large 650-square-millimeter die, built on IBM’s new 22nm SOI process, the 12-core (96-thread) Power8 is one of the largest, and probably the most powerful, CPUs ever built. In a separate move, IBM is opening up the entire Power8 architecture and its technical documentation through the OpenPower Foundation, allowing third parties to make Power-based chips (much like ARM’s licensing model) and to create specialized coprocessors (GPUs, FPGAs, etc.) that link directly into the CPU’s memory space using IBM’s new CAPI interface. You will not be surprised to hear that Nvidia, Samsung, and Google — three huge players among hundreds more who are beholden to Intel’s server monopoly — are core members of the OpenPower Foundation. The Power8 CPU and the OpenPower Foundation are the cornerstones of a very big, well-orchestrated plan to finally put an end to x86’s reign and place a fairer, more powerful architecture at the head of the server table.

First, we should talk about the new Power8 chip. There are 12 CPU cores, each with 512KB of L2 SRAM and 8MB of L3 EDRAM, for totals of 6MB of L2 and 96MB of L3 cache. On top of that, there is 230GB/sec of bandwidth to up to 1TB of DRAM. Whereas each Intel Xeon core is capable of two-way simultaneous multithreading, and Power7+ cores can handle four threads, Power8 ups the ante to eight simultaneous threads (SMT8). As you’d expect, other parts of the chip have been similarly expanded to cater for the Power8’s massive parallelism: there are eight decoders (up from six), six dispatches per clock cycle, double the load units (four), a data cache that can now process four 128-bit transactions per cycle, and a 512-bit bus between the L2 and the data cache. Take a look at the block diagram below and be awed by its massive parallelism and throughput.


IBM Power8 microarchitecture block diagram [Image credit: The Linley Group]
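As a quick sanity check, the chip-level totals follow directly from the per-core figures. Here is a throwaway sketch using only the numbers quoted above:

```python
# Aggregate Power8 figures, computed from the per-core specs quoted above.
CORES = 12
L2_PER_CORE_KB = 512    # L2 SRAM per core
L3_PER_CORE_MB = 8      # L3 EDRAM per core
SMT = 8                 # simultaneous threads per core (SMT8)

total_l2_mb = CORES * L2_PER_CORE_KB / 1024   # 6.0 MB of L2
total_l3_mb = CORES * L3_PER_CORE_MB          # 96 MB of L3
total_threads = CORES * SMT                   # 96 hardware threads

print(total_l2_mb, total_l3_mb, total_threads)  # 6.0 96 96
```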

We expect the Power8 will eventually be capable of clock speeds around 4.5GHz, with a TDP in the region of 250 watts. At this speed, the Power8 CPU will be around 60% faster than the Power7+ in single-threaded applications, and more than two times faster in multithreaded tasks. In certain cases, IBM says the Power8 is capable of analyzing Big Data workloads between 50 and 1,000 times faster than comparable x86 systems (the same amount of RAM, the same number of cores).

Compared to its competitors (the Power7+, the Oracle Sparc T5, the Intel Xeon), the Power8 offers anywhere between two and three times the processing power per socket. This is mostly due to the massive thread count (96 vs. 30 for the latest 15-core E7-8890 v2 Xeon) and utterly insane memory bandwidth (230GB/sec vs. 85GB/sec). In terms of performance per watt, though, the Xeon (~150W TDP) is probably just ahead of the Power8 — but in general, when you’re talking servers, power consumption plays second fiddle to performance density (how many gigaflops you can squeeze out of a single server).
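Those per-socket ratios are easy to verify from the quoted numbers. A rough back-of-the-envelope sketch (thread count and raw bandwidth are only proxies for real-world throughput, not a benchmark):

```python
# Per-socket comparison: Power8 vs. the 15-core Xeon E7-8890 v2,
# using the thread counts and memory bandwidth figures quoted above.
power8_threads = 12 * 8   # 12 cores, SMT8
xeon_threads = 15 * 2     # 15 cores, two-way Hyper-Threading
power8_bw_gbs = 230       # GB/sec memory bandwidth
xeon_bw_gbs = 85

print(power8_threads / xeon_threads)          # 3.2x the hardware threads
print(round(power8_bw_gbs / xeon_bw_gbs, 2))  # ~2.71x the memory bandwidth
```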

IBM Power8 CPU die shot, labeled


Beyond raw SPECint and SPECfp performance, Power8 also introduces CAPI (Coherent Accelerator Processor Interface). CAPI is a direct link into the CPU, allowing peripherals and coprocessors to communicate directly with the CPU, bypassing (substantial) operating system and driver overheads. CAPI is similar to Intel’s QPI, but where QPI is closed and proprietary, IBM is opening up CAPI to third parties. IBM’s Power Systems CTO, Satya Sharma, told me in an interview that in the case of flash memory attached via CAPI, the overhead is reduced by a factor of 20. More importantly, though, CAPI can be used to attach coprocessors — GPUs, FPGAs — directly to the Power8 CPU for some truly insane workload-specific performance boosts. It is due to these CAPI-attached coprocessors that a Power8 system can be 1,000 times faster than a comparable x86 system.

 

The OpenPower Foundation

While the Power8 chip is veritably beastly, it will take a lot more than a fancy piece of hardware to dislodge Intel x86 as the undisputed king of servers (Intel chips currently power somewhere in the region of 95% of all servers). What IBM needs is a full top-to-bottom Power architecture stack, from first-party and third-party hardware through to a broad, healthy ISV (independent software vendor) ecosystem. This is where the OpenPower Foundation comes in.

Basically, IBM is making the Power8 architecture and detailed technical documentation open to members of the Foundation. Currently, the foundation consists of Altera, Google, Nvidia, Micron, Samsung, Tyan, ZTE, and others. Each of these members will use the Power documentation in different ways. Altera is developing FPGAs that connect directly into the Power8 chip via CAPI, to provide stupendous speed-ups for specific tasks. Tyan, with help from Google, will create third-party motherboards that are compatible with the Power8 chip, with the goal of producing cheap, Power8-based machines for internet-scale server farms. Nvidia, like Altera, will develop a Tesla-like GPU coprocessor that connects directly to the CPU via CAPI. Suzhou PowerCore will license the Power architecture to make its own Power8-compatible chips for China’s domestic server market.

Taking down Intel


IBM’s giant Power8 chip, being held in a normal-sized hand.

The hope is that, by cultivating a broad hardware and software ecosystem, Power will be able to challenge Intel in the server space. IBM wants to be the ARM of servers, basically: In much the same way that ARM’s open architecture and licensing model allowed it to squash Intel in the mobile and embedded spaces, IBM wants to do the same thing in servers.

Usually I would say that it’s a fool’s errand to challenge Intel, but if anyone can do it, it’s IBM. There is a lot of antipathy towards Intel and the strategies it has used to dismantle everyone and everything that has threatened to disrupt its dominion over the computing industry. Server vendors (IBM, HP, Dell) and internet-scale service providers (Google, Facebook) use x86 chips, but only because Intel has ensured that there’s no other viable option. I don’t think there’s a single company that doesn’t want to get out from underneath the choking heft of Intel x86 — and now, at long last, IBM might be offering a way out. If the surge in mobile computing has taught us anything, it’s that Intel isn’t unbeatable — that there’s a chink in its armor that IBM and the rest of the OpenPower Foundation think they can exploit. “We are entering some new spaces,” Sharma told me. “It’s a transformational event for Power. It’s going to take Power to new spaces we haven’t gone before.”

New Power8 servers, being tended to by a couple of IBMers


IBM also announced today that Canonical’s Ubuntu Server will be available for all Power8-based systems, and that it will continue to invest in Linux (IBM/Power is historically Unix-focused, not Linux). “Now is the time to expand into the Linux space,” Sharma said as our interview was wrapping up. “Ubuntu is now one of the primary targets for Power.”

The first Power8 servers will be available from June 10, in a range of 1- and 2-socket 2U and 4U models. The Power S812L and Power S822L (both 2U) will exclusively run Linux. The flagship of the Power8 line is the Power S824, a 4U design with two CPU sockets, maxing out at 24 cores (192 threads) and 1TB of RAM. The low-end Linux-only S812L starts at $8,000. (IBM wouldn’t tell us exact pricing for a standalone Power8 CPU, but it’s probably in the region of $5,000.)
