Monday, December 17, 2012

Computers that will see, hear, smell: Next, world domination?

 

By Deborah Netburn

December 17, 2012, 3:15 p.m.

IBM's 5 in 5 -- a list of five innovations that could change the world in five years -- focuses on how computers are developing the ability to taste, touch, smell, see and hear just like humans do, except way better.

It is kind of exciting and kind of terrifying, but mostly just really cool.

For example, Hendrik Hamann, a research manager of physical systems for IBM, describes a smartphone that could use a computerized nose to "smell" if we are sick. Forget the thermometer and the doctor's visit -- we will simply breathe into our cellphones to find out if we have the flu.

Robyn Schwartz describes how smartphones of the future might use vibrations to allow us to virtually "touch" a piece of material and feel its texture. This technology is already available for some video games, but she imagines a world in which online shoppers don't just see and read about an item of clothing, they can stroke it as well.

Dimitri Kanevsky, a research scientist at IBM, explains that sound sensors may be able to "hear" an earthquake coming, long before a human would sense it. And John Smith explains that a computer that knows how to make sense of what it can "see" would be able to diagnose a cancerous growth on your skin.

Finally, Lav Varshney describes a computer program that can learn what pleases your taste buds on a molecular level, and can then design healthy recipes that taste delicious to you, based on that information. 

So far, so cool, right? But what made me feel a little scared was this paragraph in an essay by IBM's chief innovation officer, Bernard Meyerson, about this year's 5 in 5.

He writes:

"In the coming years, computers will become even more adept at dealing with complexity. Rather than depending on humans to write software programs that tell them what to do, they will program themselves so they can adapt to changing realities and expectations. They’ll learn by interacting with data in all of its forms -- numbers, text, video, etc. And, increasingly, they’ll be designed so they think more like the humans."

Later in the essay he explains that IBM is not interested in replacing human thinking with machine thinking. He actually says it twice. Instead he imagines a future where humans and machines work together to make the world a better place. 

I hope that prediction, at least, is correct.

Friday, November 16, 2012

Anonymous attacks Israeli websites in response to Gaza attacks

 

Anonymous says it has launched attacks on Israeli websites in response to Israeli attacks on Gaza and threats by the IDF to cut the territory's telecom links. Reports say the attacks brought down some Israeli sites temporarily, and defaced others with pro-Palestinian messages.

According to The Huffington Post, early on Thursday morning, Anonymous said it had launched its "OpIsrael" campaign. The group explained that by issuing threats to cut Gaza telecommunications links, Israel had "crossed a line in the sand." The hacktivist group said: "We are ANONYMOUS and NO ONE shuts down the Internet on our watch."

A statement by the group said: "To the people of Gaza and the 'Occupied Territories,' know that Anonymous stands with you in this fight. We will do everything in our power to hinder the evil forces of the IDF arrayed against you. We will use all our resources to make certain you stay connected to the Internet and remain able to transmit your experiences to the world."

Anonymous later posted a message saying it had taken down Israel's "top security and surveillance website," a claim Forbes described as "hyperbolic."

According to Forbes, the statement included a photo of what the group claimed were burning buildings in Gaza with a message that said: "We Anonymous will not sit back and watch a cowardly Zionist State demolish innocent people’s lives.”

Forbes reports that another message attributed to Pakistani Anonymous hackers, said, “The people of Pakistan are always supporting the brave people of Gaza, we love you!”

The Huffington Post reports Anonymous made a call to its followers on Twitter to bring down websites that belong to the Israeli government and its military.

Wednesday, November 14, 2012

HP unveils PowerCARD payment management software

06 November 2012  |   Source: HP


HP today announced that banks, retailers and telecom operators can now boost the cost-effectiveness of their card operations with PowerCARD payment management software.

With this agreement, HP now offers customers a card payment solution across Europe, the Middle East and Africa. This solution complements HP's existing card services utilities, which are already available to clients in the Americas and Asia-Pacific regions.

HPS, an HP AllianceONE partner, is a leading payment software company that currently provides state-of-the-art card, ATM and POS management systems for over 320 financial institutions in 70 countries. As part of the HP AllianceONE program, HP and HPS will jointly offer outsourced payments solutions based on PowerCARD software in the Europe, Middle East and Africa (EMEA) region. HP offers a broad solution set for both in-house processing and outsourced services and works with clients globally to define the appropriate strategic fit for their cards and payments business.

"Organisations now have the opportunity to improve their card solutions by replacing legacy hardware and software with something much more flexible and cost-effective," says Ed Adshead-Grant, EMEA card and payments practice head, HP. "HP is viewed by many as the OEM of the payment industry, and our tests confirm PowerCARD as a robust, scalable complement to HP's own Payments Solutions for the EMEA region. Organisations of all sizes can now boost the cost-effectiveness and competitive positioning of their card issuing, switching and acquiring solutions."

In a series of rigorous tests conducted over five weeks on the latest HP Superdome 2 servers, the HP European Performance Centre scaled PowerCARD payment software up to 100 million accounts with processing cycles of 3,000 transactions per second for up to 5,000 concurrent users. Performing online data and batch processing with economical HP-UX server infrastructures ensures that resources are readily available, and enables changes to products and services to be delivered in a fraction of the time taken by a mainframe solution.

"The HP tests clearly demonstrate that the reliability, availability and scalability required for 24x7 card processing services can be achieved by running PowerCARD software on HP-UX servers," says Abdeslam Alaoui, Managing Director, HPS. "The collaboration between HPS and HP in this project has been excellent. As an AllianceOne Partner, HPS will continue working with HP to deliver modernisation programs in electronic payment for any financial institution that is looking to win market share or escape the technology legacy trap."

HP AllianceONE is a comprehensive partner program for technology companies focused on HP's Converged Infrastructure strategy of providing a shared services model to deliver secure, best-in-class applications. 

Monday, October 22, 2012

Qatar's Qtel eyeing stake in Maroc Telecom-paper

 

PARIS Oct 22 (Reuters) - Qatar Telecom, the telecoms group controlled by Qatar, has expressed interest in Vivendi's controlling stake in Maroc Telecom, Morocco's largest telecoms operator, as part of the French media group's strategic review, the Financial Times reported on Monday.

QTel is preparing to make a bid for a 53 percent stake in Maroc Telecom, but it faces competition from Etisalat, the United Arab Emirates-based telecoms company, the paper said, citing two sources close to the matter.

Earlier this month, Vivendi asked Credit Agricole and Lazard to gauge appetite for the 53 percent stake in Maroc Telecom without giving them a formal mandate for the sale, sources familiar with the matter had told Reuters.

Vivendi declined to comment.

Wednesday, October 17, 2012

Google Throws Open Doors to Its Top-Secret Data Center

 

 

A server room in Council Bluffs, Iowa.

Photo: Google/Connie Zhou

If you’re looking for the beating heart of the digital age — a physical location where the scope, grandeur, and geekiness of the kingdom of bits become manifest—you could do a lot worse than Lenoir, North Carolina. This rural city of 18,000 was once rife with furniture factories. Now it’s the home of a Google data center.

Engineering prowess famously catapulted the 14-year-old search giant into its place as one of the world’s most successful, influential, and frighteningly powerful companies. Its constantly refined search algorithm changed the way we all access and even think about information. Its equally complex ad-auction platform is a perpetual money-minting machine. But other, less well-known engineering and strategic breakthroughs are arguably just as crucial to Google’s success: its ability to build, organize, and operate a huge network of servers and fiber-optic cables with an efficiency and speed that rocks physics on its heels. Google has spread its infrastructure across a global archipelago of massive buildings—a dozen or so information palaces in locales as diverse as Council Bluffs, Iowa; St. Ghislain, Belgium; and soon Hong Kong and Singapore—where an unspecified but huge number of machines process and deliver the continuing chronicle of human experience.

This is what makes Google Google: its physical network, its thousands of fiber miles, and those many thousands of servers that, in aggregate, add up to the mother of all clouds. This multibillion-dollar infrastructure allows the company to index 20 billion web pages a day. To handle more than 3 billion daily search queries. To conduct millions of ad auctions in real time. To offer free email storage to 425 million Gmail users. To zip millions of YouTube videos to users every day. To deliver search results before the user has finished typing the query. In the near future, when Google releases the wearable computing platform called Glass, this infrastructure will power its visual search results.

The problem for would-be bards attempting to sing of these data centers has been that, because Google sees its network as the ultimate competitive advantage, only critical employees have been permitted even a peek inside, a prohibition that has most certainly included bards. Until now.

A central cooling plant in Google’s Douglas County, Georgia, data center.
Photo: Google/Connie Zhou

Here I am, in a huge white building in Lenoir, standing near a reinforced door with a party of Googlers, ready to become that rarest of species: an outsider who has been inside one of the company’s data centers and seen the legendary server floor, referred to simply as “the floor.” My visit is the latest evidence that Google is relaxing its black-box policy. My hosts include Joe Kava, who’s in charge of building and maintaining Google’s data centers, and his colleague Vitaly Gudanets, who populates the facilities with computers and makes sure they run smoothly.

A sign outside the floor dictates that no one can enter without hearing protection, either salmon-colored earplugs that dispensers spit out like trail mix or panda-bear earmuffs like the ones worn by airline ground crews. (The noise is a high-pitched thrum from fans that control airflow.) We grab the plugs. Kava holds his hand up to a security scanner and opens the heavy door. Then we slip into a thunderdome of data …

Urs Hölzle had never stepped into a data center before he was hired by Sergey Brin and Larry Page. A hirsute, soft-spoken Swiss, Hölzle was on leave as a computer science professor at UC Santa Barbara in February 1999 when his new employers took him to the Exodus server facility in Santa Clara. Exodus was a colocation site, or colo, where multiple companies rent floor space. Google’s “cage” sat next to servers from eBay and other blue-chip Internet companies. But the search company’s array was the most densely packed and chaotic. Brin and Page were looking to upgrade the system, which often took a full 3.5 seconds to deliver search results and tended to crash on Mondays. They brought Hölzle on to help drive the effort.

It wouldn’t be easy. Exodus was “a huge mess,” Hölzle later recalled. And the cramped hodgepodge would soon be strained even more. Google was not only processing millions of queries every week but also stepping up the frequency with which it indexed the web, gathering every bit of online information and putting it into a searchable format. AdWords—the service that invited advertisers to bid for placement alongside search results relevant to their wares—involved computation-heavy processes that were just as demanding as search. Page had also become obsessed with speed, with delivering search results so quickly that it gave the illusion of mind reading, a trick that required even more servers and connections. And the faster Google delivered results, the more popular it became, creating an even greater burden. Meanwhile, the company was adding other applications, including a mail service that would require instant access to many petabytes of storage. Worse yet, the tech downturn that left many data centers underpopulated in the late ’90s was ending, and Google’s future leasing deals would become much more costly.

For Google to succeed, it would have to build and operate its own data centers—and figure out how to do it more cheaply and efficiently than anyone had before. The mission was codenamed Willpower. Its first built-from-scratch data center was in The Dalles, a city in Oregon near the Columbia River.

Hölzle and his team designed the $600 million facility in light of a radical insight: Server rooms did not have to be kept so cold. The machines throw off prodigious amounts of heat. Traditionally, data centers cool them off with giant computer room air conditioners, or CRACs, typically jammed under raised floors and cranked up to arctic levels. That requires massive amounts of energy; data centers consume up to 1.5 percent of all the electricity in the world.


Google realized that the so-called cold aisle in front of the machines could be kept at a relatively balmy 80 degrees or so—workers could wear shorts and T-shirts instead of the standard sweaters. And the “hot aisle,” a tightly enclosed space where the heat pours from the rear of the servers, could be allowed to hit around 120 degrees. That heat could be absorbed by coils filled with water, which would then be pumped out of the building and cooled before being circulated back inside. Add that to the long list of Google’s accomplishments: The company broke its CRAC habit.

Google also figured out money-saving ways to cool that water. Many data centers relied on energy-gobbling chillers, but Google’s big data centers usually employ giant towers where the hot water trickles down through the equivalent of vast radiators, some of it evaporating and the remainder attaining room temperature or lower by the time it reaches the bottom. In its Belgium facility, Google uses recycled industrial canal water for the cooling; in Finland it uses seawater.

The company’s analysis of electrical flow unearthed another source of waste: the bulky uninterrupted-power-supply systems that protected servers from power disruptions in most data centers. Not only did they leak electricity, they also required their own cooling systems. But because Google designed the racks on which it placed its machines, it could make space for backup batteries next to each server, doing away with the big UPS units altogether. According to Joe Kava, that scheme reduced electricity loss by about 15 percent.

All of these innovations helped Google achieve unprecedented energy savings. The standard measurement of data center efficiency is called power usage effectiveness, or PUE. A perfect number is 1.0, meaning all the power drawn by the facility is put to use. Experts considered 2.0—indicating half the power is wasted—to be a reasonable number for a data center. Google was getting an unprecedented 1.2.
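PUE is simple arithmetic: the total power drawn by the facility divided by the power that actually reaches the computing equipment. The short sketch below, with made-up wattage figures rather than Google's actual measurements, shows how the 2.0 and 1.2 numbers above translate into wasted power.

```python
# Illustrative only: PUE = total facility power / IT equipment power.
# The figures below are hypothetical, not Google's measurements.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: 1.0 means every watt reaches the servers."""
    return total_facility_kw / it_equipment_kw

print(pue(2000, 1000))  # 2.0 -- half the power is lost to cooling, UPS, etc.
print(pue(1200, 1000))  # 1.2 -- roughly the figure cited for Google above
```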

For years Google didn’t share what it was up to. “Our core advantage really was a massive computer network, more massive than probably anyone else’s in the world,” says Jim Reese, who helped set up the company’s servers. “We realized that it might not be in our best interest to let our competitors know.”

But stealth had its drawbacks. Google was on record as being an exemplar of green practices. In 2007 the company committed formally to carbon neutrality, meaning that every molecule of carbon produced by its activities—from operating its cooling units to running its diesel generators—had to be canceled by offsets. Maintaining secrecy about energy savings undercut that ideal: If competitors knew how much energy Google was saving, they’d try to match those results, and that could make a real environmental impact. Also, the stonewalling, particularly regarding The Dalles facility, was becoming almost comical. Google’s ownership had become a matter of public record, but the company still refused to acknowledge it.

In 2009, at an event dubbed the Efficient Data Center Summit, Google announced its latest PUE results and hinted at some of its techniques. It marked a turning point for the industry, and now companies like Facebook and Yahoo report similar PUEs.

Make no mistake, though: The green that motivates Google involves presidential portraiture. “Of course we love to save energy,” Hölzle says. “But take something like Gmail. We would lose a fair amount of money on Gmail if we did our data centers and servers the conventional way. Because of our efficiency, we can make the cost small enough that we can give it away for free.”

Google’s breakthroughs extend well beyond energy. Indeed, while Google is still thought of as an Internet company, it has also grown into one of the world’s largest hardware manufacturers, thanks to the fact that it builds much of its own equipment. In 1999, Hölzle bought parts for 2,000 stripped-down “breadboards” from “three guys who had an electronics shop.” By going homebrew and eliminating unneeded components, Google built a batch of servers for about $1,500 apiece, instead of the then-standard $5,000. Hölzle, Page, and a third engineer designed the rigs themselves. “It wasn’t really ‘designed,’” Hölzle says, gesturing with air quotes.

More than a dozen generations of Google servers later, the company now takes a much more sophisticated approach. Google knows exactly what it needs inside its rigorously controlled data centers—speed, power, and good connections—and saves money by not buying unnecessary extras. (No graphics cards, for instance, since these machines never power a screen. And no enclosures, because the motherboards go straight into the racks.) The same principle applies to its networking equipment, some of which Google began building a few years ago.

Outside the Council Bluffs data center, radiator-like cooling towers chill water from the server floor down to room temperature.
Photo: Google/Connie Zhou

So far, though, there’s one area where Google hasn’t ventured: designing its own chips. But the company’s VP of platforms, Bart Sano, implies that even that could change. “I’d never say never,” he says. “In fact, I get that question every year. From Larry.”

Even if you reimagine the data center, the advantage won’t mean much if you can’t get all those bits out to customers speedily and reliably. And so Google has launched an attempt to wrap the world in fiber. In the early 2000s, taking advantage of the failure of some telecom operations, it began buying up abandoned fiber-optic networks, paying pennies on the dollar. Now, through acquisition, swaps, and actually laying down thousands of strands, the company has built a mighty empire of glass.

But when you’ve got a property like YouTube, you’ve got to do even more. It would be slow and burdensome to have millions of people grabbing videos from Google’s few data centers. So Google installs its own server racks in various outposts of its network—mini data centers, sometimes connected directly to ISPs like Comcast or AT&T—and stuffs them with popular videos. That means that if you stream, say, a Carly Rae Jepsen video, you probably aren’t getting it from Lenoir or The Dalles but from some colo just a few miles from where you are.

Over the years, Google has also built a software system that allows it to manage its countless servers as if they were one giant entity. Its in-house developers can act like puppet masters, dispatching thousands of computers to perform tasks as easily as running a single machine. In 2002 its scientists created Google File System, which smoothly distributes files across many machines. MapReduce, a Google system for writing cloud-based applications, was so successful that an open source version called Hadoop has become an industry standard. Google also created software to tackle a knotty issue facing all huge data operations: When tasks come pouring into the center, how do you determine instantly and most efficiently which machines can best afford to take on the work? Google has solved this “load-balancing” issue with an automated system called Borg.
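Google's MapReduce itself is proprietary, but the programming model it popularized is easy to sketch. The toy example below is illustrative only (plain Python, not anything Google runs): a map step emits key-value pairs from each document and a reduce step merges them, the same split that Hadoop reimplements at data-center scale.

```python
# A toy word count in the map/reduce style described above -- not Google's
# internal MapReduce, just the programming model it popularized.
from collections import defaultdict

def map_phase(document: str):
    # Emit (word, 1) pairs; in a real system each mapper handles one chunk of input.
    for word in document.split():
        yield word.lower(), 1

def reduce_phase(pairs):
    # Sum the counts per key; in a real system reducers run on many machines in parallel.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

docs = ["the data center is a computer", "the computer is a warehouse"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
print(reduce_phase(pairs))  # {'the': 2, 'data': 1, 'center': 1, 'is': 2, ...}
```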

These innovations allow Google to fulfill an idea embodied in a 2009 paper written by Hölzle and one of his top lieutenants, computer scientist Luiz Barroso: “The computing platform of interest no longer resembles a pizza box or a refrigerator but a warehouse full of computers … We must treat the data center itself as one massive warehouse-scale computer.”

This is tremendously empowering for the people who write Google code. Just as your computer is a single device that runs different programs simultaneously—and you don’t have to worry about which part is running which application—Google engineers can treat seas of servers like a single unit. They just write their production code, and the system distributes it across a server floor they will likely never be authorized to visit. “If you’re an average engineer here, you can be completely oblivious,” Hölzle says. “You can order x petabytes of storage or whatever, and you have no idea what actually happens.”

But of course, none of this infrastructure is any good if it isn’t reliable. Google has innovated its own answer for that problem as well—one that involves a surprising ingredient for a company built on algorithms and automation: people.

At 3 am on a chilly winter morning, a small cadre of engineers begin to attack Google. First they take down the internal corporate network that serves the company’s Mountain View, California, campus. Later the team attempts to disrupt various Google data centers by causing leaks in the water pipes and staging protests outside the gates—in hopes of distracting attention from intruders who try to steal data-packed disks from the servers. They mess with various services, including the company’s ad network. They take a data center in the Netherlands offline. Then comes the coup de grâce—cutting most of Google’s fiber connection to Asia.

Turns out this is an inside job. The attackers, working from a conference room on the fringes of the campus, are actually Googlers, part of the company’s Site Reliability Engineering team, the people with ultimate responsibility for keeping Google and its services running. SREs are not merely troubleshooters but engineers who are also in charge of getting production code onto the “bare metal” of the servers; many are embedded in product groups for services like Gmail or search. Upon becoming an SRE, members of this geek SEAL team are presented with leather jackets bearing a military-style insignia patch. Every year, the SREs run this simulated war—called DiRT (disaster recovery testing)—on Google’s infrastructure. The attack may be fake, but it’s almost indistinguishable from reality: Incident managers must go through response procedures as if they were really happening. In some cases, actual functioning services are messed with. If the teams in charge can’t figure out fixes and patches to keep things running, the attacks must be aborted so real users won’t be affected. In classic Google fashion, the DiRT team always adds a goofy element to its dead-serious test—a loony narrative written by a member of the attack team. This year it involves a Twin Peaks-style supernatural phenomenon that supposedly caused the disturbances. Previous DiRTs were attributed to zombies or aliens.

Some halls in Google’s Hamina, Finland, data center remain vacant—for now.
Photo: Google/Connie Zhou

As the first attack begins, Kripa Krishnan, an upbeat engineer who heads the annual exercise, explains the rules to about 20 SREs in a conference room already littered with junk food. “Do not attempt to fix anything,” she says. “As far as the people on the job are concerned, we do not exist. If we’re really lucky, we won’t break anything.” Then she pulls the plug—for real—on the campus network. The team monitors the phone lines and IRC channels to see when the Google incident managers on call around the world notice that something is wrong. It takes only five minutes for someone in Europe to discover the problem, and he immediately begins contacting others.

“My role is to come up with big tests that really expose weaknesses,” Krishnan says. “Over the years, we’ve also become braver in how much we’re willing to disrupt in order to make sure everything works.” How did Google do this time? Pretty well. Despite the outages in the corporate network, executive chair Eric Schmidt was able to run a scheduled global all-hands meeting. The imaginary demonstrators were placated by imaginary pizza. Even shutting down three-fourths of Google’s Asia traffic capacity didn’t shut out the continent, thanks to extensive caching. “This is the best DiRT ever!” Krishnan exclaimed at one point.

The SRE program began when Hölzle charged an engineer named Ben Treynor with making Google’s network fail-safe. This was especially tricky for a massive company like Google that is constantly tweaking its systems and services—after all, the easiest way to stabilize it would be to freeze all change. Treynor ended up rethinking the very concept of reliability. Instead of trying to build a system that never failed, he gave each service a budget—an amount of downtime it was permitted to have. Then he made sure that Google’s engineers used that time productively. “Let’s say we wanted Google+ to run 99.95 percent of the time,” Hölzle says. “We want to make sure we don’t get that downtime for stupid reasons, like we weren’t paying attention. We want that downtime because we push something new.”
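The "budget" Treynor describes is ordinary arithmetic: an availability target fixes how many minutes of downtime a service is allowed each month. A back-of-the-envelope sketch, assuming a 30-day month and the 99.95 percent figure Hölzle mentions:

```python
# Rough error-budget arithmetic for a 99.95% availability target.
# The 30-day month is an assumption for illustration.
availability_target = 0.9995
minutes_per_month = 30 * 24 * 60  # 43,200 minutes in a 30-day month

downtime_budget = (1 - availability_target) * minutes_per_month
print(f"{downtime_budget:.1f} minutes of permitted downtime per month")  # ~21.6
```

Spend those 21.6 minutes shipping something new rather than on avoidable outages, and the budget has done its job.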

Nevertheless, accidents do happen—as Sabrina Farmer learned on the morning of April 17, 2012. Farmer, who had been the lead SRE on the Gmail team for a little over a year, was attending a routine design review session. Suddenly an engineer burst into the room, blurting out, “Something big is happening!” Indeed: For 1.4 percent of users (a large number of people), Gmail was down. Soon reports of the outage were all over Twitter and tech sites. They were even bleeding into mainstream news.

The conference room transformed into a war room. Collaborating with a peer group in Zurich, Farmer launched a forensic investigation. A breakthrough came when one of her Gmail SREs sheepishly admitted, “I pushed a change on Friday that might have affected this.” Those responsible for vetting the change hadn’t been meticulous, and when some Gmail users tried to access their mail, various replicas of their data across the system were no longer in sync. To keep the data safe, the system froze them out.

The diagnosis had taken 20 minutes, designing the fix 25 minutes more—pretty good. But the event went down as a Google blunder. “It’s pretty painful when SREs trigger a response,” Farmer says. “But I’m happy no one lost data.” Nonetheless, she’ll be happier if her future crises are limited to DiRT-borne zombie attacks.

One scenario that DiRT never envisioned was the presence of a reporter on a server floor. But here I am in Lenoir, earplugs in place, with Joe Kava motioning me inside.

We have passed through the heavy gate outside the facility, with remote-control barriers evoking the Korean DMZ. We have walked through the business offices, decked out in Nascar regalia. (Every Google data center has a decorative theme.) We have toured the control room, where LCD dashboards monitor every conceivable metric. Later we will climb up to catwalks to examine the giant cooling towers and backup electric generators, which look like Beatles-esque submarines, only green. We will don hard hats and tour the construction site of a second data center just up the hill. And we will stare at a rugged chunk of land that one day will hold a third mammoth computational facility.

But now we enter the floor. Big doesn’t begin to describe it. Row after row of server racks seem to stretch to eternity. Joe Montana in his prime could not throw a football the length of it.

During my interviews with Googlers, the idea of hot aisles and cold aisles has been an abstraction, but on the floor everything becomes clear. The cold aisle refers to the general room temperature—which Kava confirms is 77 degrees. The hot aisle is the narrow space between the backsides of two rows of servers, tightly enclosed by sheet metal on the ends. A nest of copper coils absorbs the heat. Above are huge fans, which sound like jet engines jacked through Marshall amps.


We walk between the server rows. All the cables and plugs are in front, so no one has to crack open the sheet metal and venture into the hot aisle, thereby becoming barbecue meat. (When someone does have to head back there, the servers are shut down.) Every server has a sticker with a code that identifies its exact address, useful if something goes wrong. The servers have thick black batteries alongside. Everything is uniform and in place—nothing like the spaghetti tangles of Google’s long-ago Exodus era.

Blue lights twinkle, indicating … what? A web search? Someone’s Gmail message? A Glass calendar event floating in front of Sergey’s eyeball? It could be anything.

Every so often a worker appears—a long-haired dude in shorts propelling himself by scooter, or a woman in a T-shirt who’s pushing a cart with a laptop on top and dispensing repair parts to servers like a psychiatric nurse handing out meds. (In fact, the area on the floor that holds the replacement gear is called the pharmacy.)

How many servers does Google employ? It’s a question that has dogged observers since the company built its first data center. It has long stuck to “hundreds of thousands.” (There are 49,923 operating in the Lenoir facility on the day of my visit.) I will later come across a clue when I get a peek inside Google’s data center R&D facility in Mountain View. In a secure area, there’s a row of motherboards fixed to the wall, an honor roll of generations of Google’s homebrewed servers. One sits atop a tiny embossed plaque that reads july 9, 2008. google’s millionth server. But executives explain that this is a cumulative number, not necessarily an indication that Google has a million servers in operation at once.

Wandering the cold aisles of Lenoir, I realize that the magic number, if it is even obtainable, is basically meaningless. Today’s machines, with multicore processors and other advances, have many times the power and utility of earlier versions. A single Google server circa 2012 may be the equivalent of 20 servers from a previous generation. In any case, Google thinks in terms of clusters—huge numbers of machines that act together to provide a service or run an application. “An individual server means nothing,” Hölzle says. “We track computer power as an abstract metric.” It’s the realization of a concept Hölzle and Barroso spelled out three years ago: the data center as a computer.

As we leave the floor, I feel almost levitated by my peek inside Google’s inner sanctum. But a few weeks later, back at the Googleplex in Mountain View, I realize that my epiphanies have limited shelf life. Google’s intention is to render the data center I visited obsolete. “Once our people get used to our 2013 buildings and clusters,” Hölzle says, “they’re going to complain about the current ones.”

Asked in what areas one might expect change, Hölzle mentions data center and cluster design, speed of deployment, and flexibility. Then he stops short. “This is one thing I can’t talk about,” he says, a smile cracking his bearded visage, “because we’ve spent our own blood, sweat, and tears. I want others to spend their own blood, sweat, and tears making the same discoveries.” Google may be dedicated to providing access to all the world’s data, but some information it’s still keeping to itself.

 


Amazing IT : Explore a Google data center with Street View

 


Mastercard under fire for tracking customer credit card purchases to sell to advertisers

 

  • Credit card firm refuses to reveal 'proprietary' technique that allows it to anonymously track customers and target them with online ads

  • Privacy campaigners accuse firm of 'treating details of our personal behaviour like their own property'
  • System tracks information about the date, time, amount and merchant
  • Credit card firm says system is only operational in US 

 

Mastercard has come under fire for tracking its US customers' purchases and selling the data to advertisers.

The credit card company’s MasterCard Advisors Media Solutions Group boasts it can target the most affluent customers and tell advertisers who is most likely to buy their products.

The firm does this by tracking a consumer's credit card details - although it says their identity remains secret.


Mastercard has admitted it tracks its customers' transactions so that it can sell data to advertisers about their spending habits - but claims the system never reveals their personal details.

However, it refuses to reveal how the system works, and a privacy group today accused the firm of 'treating details of our personal behaviour like their own property.'

'The foundation of all of our solutions is transaction data,' Susan Grossman, senior vice president at MasterCard Advisors Media Solutions Group said in a presentation seen by MailOnline.

When a consumer swipes a credit card in a store, she says MasterCard’s data-packaging division receives information about the date, time, amount and merchant.

The firm tracks billions of anonymous transactions from customers, which it then aggregates into small segments made up of similar transactions.
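MasterCard calls its method proprietary and has not disclosed how this aggregation works, so the snippet below is purely hypothetical: it sketches the general idea of rolling anonymous transactions up into segments by merchant category and week, with invented field names and no personally identifiable information.

```python
# Hypothetical sketch of aggregating anonymized transactions into "segments".
# Field names and the grouping rule are invented, not MasterCard's actual method.
from collections import Counter

transactions = [
    {"merchant_category": "retail", "week": 47, "amount": 120.0},
    {"merchant_category": "retail", "week": 47, "amount": 80.0},
    {"merchant_category": "travel", "week": 46, "amount": 450.0},
]

# Group by (category, week): no names, card numbers or addresses are involved.
segments = Counter((t["merchant_category"], t["week"]) for t in transactions)
for (category, week), count in segments.items():
    print(f"segment {category} / week {week}: {count} transactions")
```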


This allows the firm to sell details of these very specific 'segments' of data to advertisers.

'What if you could know the biggest week for spend and then reach those shoppers who are twice as likely to spend leading up to that week and then create campaigns?', the firm asks in an online presentation.

Called 'Leveraging MasterCard Data Insights to Reach Holiday Shoppers', the presentation is designed to attract advertisers.

However, the firm refuses to reveal how offline MasterCard purchases would follow you online to make you a target for specific ads.

In the presentation, Grossman called MasterCard’s methods 'proprietary.'

The online presentation which reveals Mastercard's tracking of its customers' purchases to sell to advertisers.

But she says none of the data collected or sold includes personally identifiable information such as names or addresses.

MasterCard, which processes 34 billion transactions a year in 210 countries and territories, said it started the initiative in February.

Ms Grossman said protecting privacy was 'core' to MasterCard’s values.

'We recognise that consumers entrust us with their information so it is of the utmost importance that we ensure no individual or personally identifiable information is used in our media solutions product,' she said.

Ms Grossman said the company's main clients for much of its history had been its issuing banks, for which MasterCard did a significant amount of statistical and predictive modelling, and that those propensity models proved applicable to companies other than banks, such as media.

Nick Pickles, director of privacy campaign group Big Brother Watch, said: 'If this data has value, then it should be up to Mastercard to ask customers for permission to use their information and offer consumers something in return.

'Instead they are treating details of our personal behaviour like their own property to be bundled up and sold on without any regard to what customers might want.

The slides reveal that Mastercard collects transaction data from its customers to target advertisements at them

'Have Mastercard made any effort to seek customer’s consent for processing their shopping habits and selling it on? How do consumers opt out? It’s exactly this kind of behaviour that leads consumers to question whether companies are more interested in their own profit than respecting people’s privacy.'

Mastercard today confirmed the scheme's existence.

A spokesman for Mastercard said: 'MasterCard is committed to protecting individual privacy.

'No personally identifiable information is collected, disclosed or used in the analysis and development of any products or services.

'In creating MasterCard Audiences, MasterCard uses aggregated and anonymized transaction data.

'MasterCard’s transaction data does not contain the cardholder’s name or any other personal data.

'The service leverages anonymized and aggregated transaction data to provide clients with insight into trends around US consumer buying behavior based on custom audience segments or specified categories including Restaurant, Hotel, Travel, Retail, Financial Services, Automotive, Entertainment, and Telco/Cable.'

The firm also said the scheme was only running in the US.

In the presentation, the firm boasts of having access to data from 1.8 billion payment cards

VIDEO: MasterCard reveals the "anatomy of a transaction":

http://www.dailymail.co.uk/sciencetech/article-2219069/Mastercard-tracking-purchases-sell-advertisers.html


Tuesday, October 16, 2012

PayPal completes a 60-day migration to Oracle Exadata

 

SAN FRANCISCO -- It was shortly after PayPal had tested the Oracle Exadata X2-8 box that corporate executives sent a mandate to IT: Ramp up your compute and storage capacity tenfold, and do it fast.

Normally, this sort of project takes several years. But PayPal was growing fast and needed to quickly meet its service level agreements (SLAs). In particular, it was looking to cut transaction response times to about 40 ms, as opposed to the 160 to 400 ms it was getting with its Solaris SPARC infrastructure. And PayPal wanted to get the project done in months, not years.

Amit Das, engineering architect at PayPal, said the company was growing exponentially and, at its peak, handling 500 payments per second. Das described a fast-paced online transaction processing (OLTP) environment with more than 500 Oracle Database instances, up to 14,000 concurrent processes, and 80,000 executions per second. The company's popular Web front end needed a lot of compute muscle on the back end.

 


Das had become familiar with Exadata well before the project began. Prior to joining PayPal last year, he worked with Oracle and was technical lead for the world's first production "go-live" for Exadata, at Apple Inc. He also has more than a decade of experience working with Oracle Real Application Clusters (RAC).

Earlier this year, Das and members of the IT team at PayPal began exploring the idea of using Oracle Exadata for the necessary ramp-up. It ended up taking about 60 days from pilot testing to production, he said.

PayPal installed Exadata "clusters" in two data centers. The new setup includes production clusters, standby clusters, and a test and development cluster. Each production cluster includes a four-node RAC configuration with 64 Exadata storage cells and two Exadata X2-8s. The total amount of space on each cluster: 131 TB.

The company deployed its production clusters in about five days. It synced up its existing information stores to Exadata using Oracle GoldenGate and performed data validation. It then completed an end-to-end application switchover, which required only 10 minutes of downtime.

"Most of that time was due to restarting the application tier," Das said. "PayPal is very happy with Exadata. It is meeting all our SLAs."

PayPal is not done. Das said that the company is interested in the Exadata X3-2 models, which Oracle officially announced on Sept. 30. The new Exadata machines boast faster processing and better capacity than the X2-8.

PayPal is also taking a look at Oracle Database 12c, which Oracle unveiled this week at its annual OpenWorld conference. Das said he is particularly interested in Oracle Database 12c's Pluggable Databases feature as well as improvements to RAC.

Tuesday, September 18, 2012

Samsung: Hey, doesn’t the iPhone 4 look a lot like our old MP3 players

Chris Davies, Sep 18th 2012

Samsung has ramped up its PR offensive in the aftermath of the $1bn Apple ruling, highlighting similarities between its media players and the iPhone 4 launched later on. There “is a lot of misunderstanding” about design inspiration, Samsung wrote on its official blog, and it intends to use “objective facts, one by one, to reveal the truth [and] resolve all the misunderstandings.”

First step of that process is wheeling out the Samsung YP-Q3, a media player from back in early 2010, and one which Samsung believes is surprisingly similar to the industrial design of Apple’s iPhone 4. That smartphone actually launched in 2010 as well, but two months later than Samsung’s PMP did.

It’s not the only Samsung-branded gadget that pre-empted the iPhone 4 with some passably similar styling, the company would like to point out. The YP-Zt, another media player, also had a now-distinctive black fascia and silver surround, with crisp corners like the iPhone 4/4S.

Then, of course, there’s the Samsung F700. The 2006 touchscreen smartphone was a centerpiece of Samsung’s defense strategy for the iPhone design suit, though it failed to help the company escape final censure.

It’s not the first time we’ve seen Samsung look to sway public opinion when its ability to convince the court was less than successful. The Korean company encountered a furious Apple and an angry judge after it released a dossier of design arguments contrasting the iPhone’s style with its own design output. Whether this latest attempt will prove at all useful for Samsung’s appeal is questionable.

Friday, September 14, 2012

iPhone 5 bill of materials estimate is $167.50

 

OTTAWA – The bill of materials for Apple’s new iPhone 5 comes in at an estimated $167.50 for the 16 GB version, or about $35 higher than a comparable version of the iPhone 4S, according to a preliminary estimate by the teardown specialists at TechInsights.

The preliminary analysis by TechInsights, which is owned by UBM, the publisher of EE Times, is based on initial information about the iPhone 5 that was released by Apple in San Francisco on Wednesday (Sept. 12). The iPhone 5, which is scheduled to go on sale Sept. 21, is expected to sell for about $199 for the 16 GB version.

TechInsights analysts also based their estimates on previous versions of 16 GB iPhones, including the known features and cost of components for the iPhone 4 and 4S.

http://eetimes.com/ContentEETimes/Images/iPhone5%20BOM%20comparison_702.JPG

Saturday, September 8, 2012

The Infamous Google Hackers Are Still Out There, Exploiting Our Computers

Nearly three years ago, Google was hacked by a group that was almost certainly sponsored by the Chinese Government. But as Wired tells it, the assignment for that group wasn’t a one-off thing. In fact, they’ve executed no fewer than eight zero-day attacks on websites over the past three years, and have compromised at least 1000 computers in various sectors.

The news originally came from a research report compiled by Symantec, which says the group went after US companies in various sectors, including defence, energy, technology and finance, not to mention Chinese dissidents. All of these attacks revolved around zero-day exploits, in which the hackers — dubbed the Elderwood Group — discover any vulnerabilities and launch an attack before a developer is even aware of the issue. In 2011, there were eight total. In the past few months, Symantec says the group has pulled off four.

Wired believes it takes a sophisticated team to pull off something so complex.

In these so-called “watering hole” attacks — named for their similarity to a lion waiting for unsuspecting prey to arrive at a watering hole — an invisible iframe on the web site causes victim computers to contact a server and silently download a backdoor Trojan that gives the attackers control over the victim’s machine.

Symantec believes the gang involves several teams of varying skills and duties. One team of highly skilled programmers is likely tasked with finding zero-day vulnerabilities, writing exploits, crafting re-usable platform tools, and infecting web sites; while a less skilled team is involved with identifying targets based on various goals — stealing design documents for a military product or tracking the activities of human rights activists — and sending out the spear-phishing attacks. A third team is likely tasked with reviewing and analysing the intelligence and intellectual property stolen from victims.

But how did Symantec trace these attacks back to the Elderwood Group? Well, as it turns out, many of the same code snippets and executable files used in the Google attack were used in nearly all of the later attacks. Given how active this group is, its seemingly direct ties to China and America’s grandstanding about cyberthreats, the thought of a cyberwar with China might not be too far-fetched. [Symantec via Wired]

Apple takes Iphone 5 memory orders away from Samsung

 

Apple logo

APPLE HAS CUT DRAM and NAND memory module orders from its mobile devices arch-rival Samsung as it tries to move away from sourcing most of its parts from its biggest competitor.

As Apple's lawyers do battle with Samsung in courtrooms around the world, the relationship between the two companies is complicated by the fact that Apple needs Samsung's memory chips and fabs, while Apple remains Samsung's largest single customer. Now reports are emerging that Apple has shifted its DRAM and NAND memory orders away from Samsung as it tries to diversify from a single source of silicon.

Apple has tapped SK Hynix and Elpida to supply it with memory modules for its upcoming Iphone 5. According to Reuters' source, Apple has kept Samsung as a supplier for the memory in the Iphone 5 though it didn't elaborate further. It is very likely Apple is still using Samsung as a wafer baker for its A series of system-on-chip (SoC) processors.

Reuters' source said, "Samsung is still in the list of initial memory chip suppliers [for new Iphones]. But Apple orders have been trending down and Samsung is making up for the reduced order from others, notably Samsung's handset business."

SK Hynix is the second largest DRAM manufacturer behind Samsung, and that makes it an obvious candidate to replace Samsung. Elpida, however, is in the midst of bankruptcy, with shareholders arguing over whether Micron's bid to buy the firm offers a high enough price.

Apple has strongly hinted that it is looking to move business away from Samsung, if for no other reason than to ensure resilience in its supply chain. Apple has also been under a bit of pressure to source more components within the US, and should Micron complete its purchase of Elpida, that shift of ownership could sit well with both firms' supporters.

The Inquirer (http://s.tt/1mM90)

Thursday, September 6, 2012

Apple Responds to UDID Leak, Says Did Not Provide Data to FBI

 

Wednesday September 5, 2012 10:04 am PDT by Eric Slivka

AllThingsD reports that Apple has issued a statement responding to this week's leak of one million unique device identifiers (UDIDs) for iOS devices, noting that it did not provide the FBI with the information. An FBI computer was claimed by the hackers to be the source of the information, but the FBI has denied any involvement in the situation.

“The FBI has not requested this information from Apple, nor have we provided it to the FBI or any organization. Additionally, with iOS 6 we introduced a new set of APIs meant to replace the use of the UDID and will soon be banning the use of UDID,” Apple spokesperson Natalie Kerris told AllThingsD.

With the AntiSec hackers claiming to be in possession of 12 million UDIDs as well as additional personal information tied to some of the numbers, it remains unclear exactly where the data came from.

Apple has been working to phase out use of the UDID, creating new tools to allow developers to track usage of their apps on a per-device basis. With the UDID being a universal identifier, it has been used by advertisers and others to collect information across apps and other usage to develop user profiles for marketing purposes, and Apple's new system will seek to improve user privacy.

Saturday, September 1, 2012

Benguerir, Morocco | Mohammed VI University | L'Université Polytechnique de Benguérir

Situated in the heart of the New Green Town of Benguerir, the Mohammed VI University will be one of the leading hubs for global environmental research and development and will respect environmental quality and sustainable development principles. The development favours an organic layout similar to the medinas, where a hierarchy of spaces and the creation of spatial elements and varied volumes is possible. It relies on the contrast of light and shadow, takes into consideration natural elements (wind, sun, cold, rain) and shapes the site's topography - the basis of the project.


Client: Groupe OCP
Project: Construction of the Mohammed VI University in Benguerir
Selection method: Internal tender by invitation
Team: Groupe-6 (agent, architecture and town planning) - BDP (landscaping and sustainable development) - Agence Rachid Andaloussi - Aziz Lazrak Architect
Competition: 2011
Net surface area: 300,000 m2
Construction costs: €232 M excl. VAT (2007 estimated value)

Project directors: Alan Hennessy, Michel Rafin
Project leader: Gianni La Cognata
Architectural team: Céline Ouedraogo, Antoine Buisseret
Assistant: Jocelyne Buisson


Source: Groupe-6, OCP

Monday, August 27, 2012

Rich Karlgaard: Apple's Lawsuit Sent a Message to Google

 

Last week Apple made headlines twice. On Monday it broke the world record for shareholder value. Apple's $623.5 billion market cap beat Microsoft's record from tech's notorious bubble era. (Microsoft needed a price-to-earnings ratio of 72 in 1999 to set the record. Apple's ratio is a modest 16.) Then on Friday, Apple won a $1.05 billion patent-infringement judgment against Samsung, the Korean electronics giant and the maker of the Galaxy line of smartphones that stirred Apple's ire.

Congratulations, Apple—twice. But these two coinciding events should give us pause.

One, how badly has Apple been hurt by copycats if it has become the richest company on earth? Do we want a patent system in which the strongest sue everyone else? Is this good for innovation?

Two, Apple lost the jury trial, in a federal court in San Jose, Calif., on most of its hardware claims, such as a ridiculous patent on curved glass for phone surface design. Apple won mostly on software, such as "pinch and stretch," a nifty design trick Apple introduced in 2007 with its first iPhone. So why did Apple sue Samsung, the Galaxy hardware manufacturer, and not Google, maker of the phone's Android software?

Apple sees Google as its chief competitor—this is no secret. Steve Jobs so hated Google's Android that, even as he struggled with cancer, he told biographer Walter Isaacson: "Google . . . ripped off the iPhone, wholesale ripped us off. I will spend my last dying breath if I need to, and I will spend every penny of Apple's $40 billion in the bank, to right this wrong. I'm going to destroy Android, because it's a stolen product. . . . I'm willing to go thermonuclear on this."

It is revealing that Jobs spent precious energy in such an outburst. As a longtime Silicon Valley observer, I believe the real story is not what it seems. The source of Jobsian rage was not his Google loathing, per se. It was fear that Apple might be "Microsofted" again.

Some history: As many people know by now, Apple founder Steve Jobs and Macintosh computer designer Bill Atkinson drew heavily from the work of Xerox's Palo Alto Research Center. In the 1970s, PARC had developed a computer called Alto. The computer featured all kinds of new stuff, including a mouse and pop-up windows. Jobs visited PARC in 1979 and a light switched on. A day or two later, Jobs met with an industrial designer and ordered him to build a prototype computer with a mouse. Thus was born the Apple Macintosh, which made its debut in 1984.


AFP/Getty Images

Apple's iPhone (left) and Samsung Electronic's Galaxy S mobile phone.

Did Apple steal from Xerox PARC or not? In the broadest sense, yes. The visit to PARC did more than inspire Steve Jobs. It sent him directly on a mission to build something very much like the Alto. But Jobs being Jobs, he immediately had ideas for improvement. The mouse should have one button, not three. It should work on any surface. It should be cheap to manufacture. The pop-up windows should look this way, not that way.

Jobs swiped the idea and made it better. But Macintosh was only modestly successful in the market, and Jobs was asked to leave Apple in 1985.

Meanwhile, his baby-boomer rival, Bill Gates, had introduced Microsoft Windows software in 1983. It wasn't pretty, and it didn't work well until version 3.0 in 1990, six years after the Macintosh's debut. But it incorporated several Apple features, and the personal-computer industry built around Windows software soon boomed and grew to immense size. Microsoft PCs crushed the Macintosh market share, which fell to 3% by the late 1990s.

In the mind of Steve Jobs, I believe, the story was this: Even if he did copy the idea of the Xerox Alto, he added so much value that the copying barely amounted to technological petty larceny; Microsoft, by contrast, just ripped off Apple without improving it.

What Bill Gates improved, of course, was not Apple's software but the entire business model for personal computing. That's how Microsoft came to dominate personal computing for a generation. That's how Microsoft beat the market-cap world record and held it until Apple topped it nearly 13 years later.

Jobs deeply feared a replay of this business-model history. He feared that Google was going to pull a Microsoft and once again reduce Apple's products to a pricey niche. To Jobs, Android looked like the new Windows.

So why doesn't Apple sue Google directly, instead of suing a Google hardware partner like Samsung? Politics and public relations, mainly. Apple knows that suing a foreign giant will go down a lot better than suing a Silicon Valley neighbor. Apple enjoys huge favor right now among customers, politicians and the public. Suing Google would divide Apple's support and tarnish the company's image. So Apple sued a foreign company to send a message to Google.

This techno-Shakespearean story is entertaining, but it is bad for the phone-buying public. (Tablet patents were also part of the Apple-Samsung court case, but smartphones were at the heart of the lawsuit.) As Samsung contemplates filing an appeal, it appears that smartphone makers may begin redesigning their products to avoid crossing swords with Apple.

Last week I bought a Samsung Galaxy Note phone. It is a marvel of machinery. It is larger, slimmer and lighter than Apple's iPhone. The Samsung Note's screen is so large that people who see it think I must have acquired an early version of the mini-iPad that Apple is expected to release soon. The Note takes the iPhone hardware design and makes it significantly better.

Funny. That's just what Apple did with the Xerox Alto.

Friday, July 6, 2012

Worse than ACTA: the INDECT project?

 

 

One cannot say the government is secretive, but oddly, there are some projects we hear far less about. So it is time to get acquainted with the INDECT project, which could fairly be described as a cyber-spy.

Funded by the European Union, it is an intelligent information system supporting observation, searching and detection for the security of citizens in urban environments. Launched very quietly on January 1, 2009, its main objective is to detect threats, abnormal behaviour and violence automatically.

However laudable that may sound, its implementation raises some concerns. If the project comes to fruition, Minority Report will become a very tangible reality: this Big Brother will track every move Internet users make and keep it stored away on its servers. INDECT will, moreover, be linked to a database combining police records and biometric identity files.

Dubious? Controversial? Think again... The project went through an ethics review in Brussels on March 15, 2011, where it was examined by Austrian, French, Dutch, German and British experts. It was declared viable and free of defects. Under the guise of developing a protective tool, we therefore risk being spied on around the clock.

If you want to learn more, there is an official site in English, but hardly any documents on the Web. Fortunately, the site can be translated into Polish...

After SOPA, PIPA and ACTA... all rise for the odious INDECT!

http://www.indect-project.eu/

Monday, May 28, 2012

First study on Morocco’s retail Islamic finance sector launched

IFAAS announces the launch of the first independent study of Morocco’s emerging retail Islamic finance sector: Islamic Finance in Morocco – sizing the retail market

IFAAS (Islamic Finance Advisory & Assurance Services), the international Islamic finance consultancy, has announced the imminent launch of its exclusive report entitled, Islamic Finance in Morocco – sizing the retail market, analysing the consumer retail market for Islamic financial products and services in Morocco. The report is the first of its kind for the country and is due to be launched in June. It is the result of an independent survey performed on a representative random sample of the Moroccan population across all major regions of the Kingdom. 

Islamic Finance in Morocco – sizing the retail market sets out the market opportunities for financial institutions with an interest in the Moroccan market. The report measures the potential market size for Islamic retail banking, finance and Islamic insurance (Takaful) and assesses how it will compete with mainstream, conventional finance.

This report will be of particular importance for financial institutions looking to set up Islamic operations in Morocco, as IFAAS' report provides a full analysis of consumer demand for Islamic finance in the Kingdom. It profiles consumers according to their existing use of financial products and services, evaluates their attitudes towards Islamic finance and reports on their propensity to take up Islamic products and services. The report also analyses consumer understanding of how Islamic financial products and services work and their likely behaviour when Islamic financial products become available in the Moroccan market.

With Islamic Finance in Morocco – sizing the retail market, bankers and insurers with interests in the Moroccan market will find answers to a number of key questions, including: how receptive are Moroccan consumers to switching from conventional to Islamic products, under which conditions, and how quickly? Which Islamic finance products are most desired? How price-sensitive is the Moroccan consumer, and would more expensive products be acceptable? Do Moroccan consumers understand the difference between a fully-fledged Islamic bank and an Islamic window of a conventional bank? How important are the institution's compliance with Shari'ah principles and its Shari'ah Board rulings to the consumer? How much new business is anticipated with the launch of Islamic financial products in the country? In a nutshell, IFAAS' report provides a comprehensive overview of the real potential of retail Islamic finance within the Moroccan market.

Commenting on the forthcoming launch of the report, Farrukh Raza, managing director of IFAAS, said: "Decision makers looking to develop a retail offering need concrete data and consumer insights in order to make critical business decisions. IFAAS' report, Islamic Finance in Morocco – sizing the retail market, based on scientifically validated information, fulfils the demand for this data, enabling financial institutions to build appropriate business and product strategies. The report is a must-have for any institution considering its next move in the nascent Moroccan Islamic finance sector."

IFAAS commissioned a highly reputed local research firm to independently undertake the quantitative survey. Random, face-to-face street interviews were conducted on a weighted sample of over 800 individuals, reflecting a true picture of the Moroccan consumer market. The target sample was composed of men and women aged 18 to 55, from a variety of socio-economic categories, living in urban and rural areas, and consisted of both banked and unbanked groups of the population. In terms of geographical coverage, the study was conducted in the towns and surrounding rural municipalities of Casablanca, Rabat, Marrakech, Agadir, Fez, Tangier and Oujda.
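For context, and purely as a hypothetical back-of-the-envelope check rather than anything published by IFAAS, a sample of roughly 800 respondents implies a worst-case sampling error of about plus or minus 3.5 percentage points at a 95% confidence level, assuming simple random sampling, which a weighted street-intercept design only approximates. A minimal sketch of that arithmetic:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case 95% margin of error for a proportion from a simple random sample.

    p = 0.5 maximises the variance, so this is the most conservative figure;
    real-world weighting and street-intercept clustering are ignored here.
    """
    return z * math.sqrt(p * (1 - p) / n)

# IFAAS reports "over 800" interviews; 800 is used here as a conservative floor.
print(f"+/- {margin_of_error(800) * 100:.1f} percentage points")   # about +/- 3.5
```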

Thursday, May 24, 2012

Google Lawyer Touts Oracle Trial Victory

Google Inc.'s general counsel on Wednesday touted the courtroom victory won by the Internet giant against Oracle Corp., calling it a warning for firms considering filing patent litigation in the future.

A federal jury had decided earlier in the day that Google didn't infringe two of Oracle's patents that protect its Java technology, as Oracle had alleged.


The win for Google came as its high-profile San Francisco trial with Oracle over both patent and copyright claims related to Google's Android mobile phone software drags into its second month, and appears to be drawing to a close.

"I think you've seen a lot of patent cases filed lately, and most of them have not resulted in successful outcomes for plaintiffs," said Google General Counsel Kent Walker. "That may send a message to those who might want to do these things in the future."

Mr. Walker declined to say how much Google has spent to defend itself against Oracle's infringement allegations, but said such lawsuits can cost about $5 million per patent to defend.

Oracle originally asserted seven patents against Google, though that number had been whittled down to two by the time the trial began last month.

It wasn't immediately clear if Oracle will appeal the patent verdict.

An Oracle spokeswoman declined to comment.

The patent verdict capped a second phase of the ongoing trial. A first phase had been focused on Oracle's copyright infringement claims.

That first phase ended with a mixed verdict, as the jury found Google infringed on copyrights protecting Java interfaces, but couldn't decide if that was acceptable under the fair use doctrine—which allows for some limited use of copyrighted material.

The trial is expected to resume next week, though the jury has been dismissed.

Mr. Walker said he expects the judge overseeing the case to issue a ruling on the copyrightability of the Java interfaces some time in the next couple of weeks.

If the judge rules that some or all of the interfaces can be protected with copyrights, Oracle is expected to pursue damages that could be significant.

Mr. Walker said a ruling that the interfaces are protected "would be a real threat to software development," which often relies on making legal use of others' code.

Google's general counsel said excessive patent litigation partly results from flaws in the U.S. patent system, which can issue legal protections to broad or obvious ideas.

"The goal is to make sure we have high quality patents," Mr. Walker said, "so that a patent doesn't become a lottery ticket" in court.

Google's partners have faced a number of additional infringement suits related to Android, which is developed according to an open source model that makes use of outside engineering.

Earlier this week, Google closed its acquisition of Motorola Mobility Holdings, which has a broad portfolio of thousands of patents.

Tuesday, May 8, 2012

How will verdict in Oracle-Google copyright case affect the search giant’s business?

 

May 8 (Bloomberg) -- A federal judge said Oracle Corp. can’t seek $1 billion in damages from Google Inc. for infringing copyrights when it developed Android software running on more than 300 million mobile devices because a jury couldn’t agree on whether it was “fair use.”

A jury in San Francisco yesterday found that Google, the largest Web-search provider, infringed Oracle’s copyrights for programming tools and nine lines of code. U.S. District Judge William Alsup said at this point Oracle can only seek damages on the nine lines, which by law would be at most $150,000.

“There has been zero finding of liability on copyright, the issue of fair use is still in play,” Alsup said about the 12-member jury’s decision on the programming tools. He ordered the patent phase of the case to begin today; damages will be taken up by the jury in the last phase of the eight-week trial.

Copyrighted work can be used without the owner's consent in some circumstances, such as when the use adds something new or functional and advances the public interest. Google attorney Robert Van Nest asked Alsup to declare a mistrial, saying the issue of whether the company is liable for infringement is directly linked to the question of whether it was fair use. Alsup gave each side until May 10 to submit arguments on that issue and didn't say when he'll rule.

“Google won the battle and it remains to be seen who won the war,” said Brian Love, an intellectual property attorney and teaching fellow at Stanford Law School.

Mobile Devices

Oracle alleged that Google, based in Mountain View, California, stole copyrights and patents for the Java programming language when it developed the Android operating system for mobile devices, which was unveiled in 2007. Oracle acquired Java when it bought Sun Microsystems Inc. in 2010.

Oracle, the largest maker of database software, is seeking damages as well as a court order preventing Google from distributing Android unless it pays for a license.

“Oracle, the nine million Java developers, and the entire Java community thank the jury for their verdict in this phase of the case,” Deborah Hellinger, an Oracle spokeswoman, said in an e-mail. “The overwhelming evidence demonstrated that Google knew it needed a license.

“Every major commercial enterprise -- except Google -- has a license for Java and maintains compatibility to run across all computing platforms,” she said.

Last Word

The jury’s findings may not be the last word on infringement. While the panel was asked to decide whether Google infringed parts of Java called application programming interfaces, or APIs, the ultimate decision on whether APIs are covered by copyrights will be made by Alsup later in the case. Alsup told the jury to assume APIs are copyrightable; he can decide later that they aren’t.
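For readers wondering what is actually in dispute: an API, in this context, is the set of declarations that tell programmers how to call a piece of functionality, as distinct from the code that implements it. The short sketch below is a generic, hypothetical illustration of that split, written in Python only for brevity; the real case concerns Java SE packages such as java.util, not this code.

```python
# A toy, hypothetical illustration of the API-versus-implementation split.
# The real dispute concerns Java SE packages such as java.util, not this code.

def max_of(values):
    """Return the largest element of a non-empty sequence.

    The function's name, parameters and documented behaviour are the "API":
    the part other programmers write their code against.
    """
    # Everything below is the implementation: one of many possible ways to
    # honour the contract above, and it can change without breaking callers.
    result = values[0]
    for v in values[1:]:
        if v > result:
            result = v
    return result

print(max_of([3, 1, 4, 1, 5]))  # 5
```

Oracle's claim, in essence, is that the declarations themselves deserve copyright protection; Google's is that re-implementing them is how compatible software has always been built.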

Alsup must also rule on Oracle’s request for a judgment in its favor that Google infringed Java copyrights and its copying wasn’t fair use. A ruling for Oracle could set aside the jury’s decision.

“We appreciate the jury’s efforts, and know that fair use and infringement are two sides of the same coin,” Google spokesman Jim Prosser said in an e-mail. “The core issue is whether the APIs here are copyrightable, and that’s for the court to decide. We expect to prevail on this issue and Oracle’s other claims.”

Seven Notes

The jury found yesterday that Google didn’t infringe the documentation for the 37 APIs at issue. The panel also determined that Google infringed just one of the three snippets of Java code that were in dispute. In addition, jurors concluded that while Google proved that “Sun and/or Oracle” led the company to believe it didn’t need a license for the Java technology, Google didn’t show that it relied on that belief when it decided not to seek a license.

The verdict came on the fifth day of deliberations in the trial, which began April 16. The jury sent Alsup seven notes during its discussions with questions, including some about the meaning of “fair use.” A May 3 note said the panel couldn’t reach a unanimous decision. Alsup ordered jurors to continue deliberations, and after learning the panel was still at an impasse, ordered them to deliver a partial verdict.

Java is a free language. Oracle argued that the parts of Java that Google used are covered by copyrights and that the search engine company was required to pay for a license to use the technology.

Operating System

Google denied infringement, saying it developed Android from scratch and that the Java elements it used aren’t covered by copyrights. Any bits of copied Java in Android, Google argued, constituted fair use, because Google gives Android away for free to programmers and because it expanded the language’s usefulness by finding a way to build a smartphone operating system with Java, something Sun and Oracle were unable to do.

Oracle argued that the Java copying was for Google’s commercial benefit -- to increase use of Google’s search engine, which generates advertising revenue -- and added nothing new to Java.

The next phase of the case is about two Java patents Oracle alleges were infringed.

The case is Oracle v. Google, 10-3561, U.S. District Court, Northern District of California (San Francisco).

Google's driverless car now street legal in Nevada

The Google car that can drive itself is now allowed on the streets of Nevada.

Technically speaking, what this means is that the car - yes, the car itself - has been issued its own driver's license. In other words, the state of Nevada has decided that the inner workings of Google's smart vehicle offer the same capacity for driving ability and judgment as a physical person sitting behind the wheel.

The car in question is a Prius, and has been loaded with a very special software package originally designed by Google for use in and around the company's headquarters in Mountain View, California.

But it is in Nevada that Google has been spending most of its time with the contraption lately, since that's the state it has been able to sweet-talk into actually making the car legal to take on the streets.

The software within the car uses all sorts of tools, ranging from a set of short-range radar sensors and video cameras to a persistent Internet connection that constantly scans Google Maps for road and traffic updates.

While obviously it is still a highly focused and experimental project, it could be the beginning of a ripple effect on the entire automotive industry.

Of course, Google hasn't really put the car through its full paces just yet. When it goes for a test drive, the car always has trained employees inside, who are able to override the autopilot mechanism at a moment's notice.

Interestingly enough, though, the only time that the driverless car has been in an accident is when it was being driven in manual override mode. It has never shown any safety problems when in its driverless state.
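The safety-driver arrangement described above follows a familiar override pattern: the autopilot proposes commands every control cycle, but a human input always preempts them instantly. The sketch below is a generic, hypothetical illustration of that pattern, not Google's software; the class and function names are invented for the example.

```python
import time
from dataclasses import dataclass

# Hypothetical illustration of a human-override ("safety driver") control loop.
# This is NOT Google's software; it only sketches the pattern described above.

@dataclass
class Command:
    steering: float   # radians, positive = left
    throttle: float   # 0.0 (coast) to 1.0 (full)

class HumanOverride:
    """Stand-in for the trained employee's controls in the test car."""
    def __init__(self) -> None:
        self.active = False                      # flips to True the instant the human takes over
        self.manual = Command(0.0, 0.0)

def autopilot_step() -> Command:
    """Placeholder planner; a real one would fuse radar, camera and map data."""
    return Command(steering=0.0, throttle=0.3)

def control_loop(human: HumanOverride, ticks: int = 5, hz: float = 50.0) -> None:
    """The human's input preempts the autopilot on every cycle."""
    for _ in range(ticks):
        cmd = human.manual if human.active else autopilot_step()
        print(f"applying {cmd}")                 # stands in for sending commands to actuators
        time.sleep(1.0 / hz)

control_loop(HumanOverride())
```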

Friday, April 20, 2012

How to delete yourself from the Internet

 

You may not feel like the flotsam and jetsam that make up the facts of your life are important, but increasingly companies are using that dry data to make your every online step as indelible as if written in blood. Here's how to take back your digital dignity.

by Seth Rosenblatt, April 19, 2012 6:19 PM PDT

The Internet companies that power your online life know that data equals money, and they're becoming bolder about using that data to track you. If they get their way, your every online step would be not only irrevocable, but traceable back to you. Fortunately, there are some positive steps you can take to reclaim your online history for yourself.

The online privacy software company Abine, which makes Do Not Track Plus, also offers a service called DeleteMe, which removes your data from numerous tracking sites and keeps it from coming back. In an unusual gesture, though, they've made public how to do for yourself everything that DeleteMe does. Here's my take on their advice.

Be warned, though. The following are not easy instructions, and it's not because they're technically complex. They require a tenacity and wherewithal that is likely to either exhaust you, drive you borderline bonkers, or both. (And no, I haven't followed the instructions to remove myself because it's essential to my job that I can be found by strangers.)

Step 1: Prepare yourself: You're going to have to be polite.
These instructions require patience for the antics of others and determination to get the job done. It's not a bad idea to get something inanimate to take your frustrations out on, because often getting your data successfully removed or changed will require the good faith of the person you're dealing with. Things are not likely to go your way the first time around.

Step 2: Aggressively track sites that aggressively track you.
This is where the DeleteMe service comes in. They currently charge $99 to un-track you from the tracking-data clearinghouses, which in turn sell your data to other entities. You can follow Abine's list of services and do the deed yourself, but that means writing many e-mails, sending numerous faxes, and placing enough phone calls to make you wish for a time machine so you can go back to the 19th century to do violence unto Alexander Graham Bell.

One thing that isn't clear from Abine's list is that most of these data aggregators will re-add you within a few months, so I recommend checking at least twice a year to see if they've sucked up your data again. Be tenacious, be polite, and if this is important to you, stick with it until you get what you want.

If you're concerned about privacy and people making connections between your birthday, your address, and your Social Security number, you owe it to yourself to perform at least one Web search for your name and see what comes up. You might be unpleasantly surprised.

Step 3: To protect your reputation, removal must be done from the source.
To get Google, Bing, and other search engines to notice a change in information as it is presented on the Web, the original site hosting that information must change. It doesn't matter which site is the source. It could be Facebook, or a local blog, or a gaming forum. If it's showing up in search results, it has little to do with the search engine and everything to do with the site of origin. Once that site has changed, then you'll see a change in the search results.

Getting something removed from a site is not a scientific process, even though you must be methodical about it. Ask politely, and as I noted above, you're likely to have to ask more than once and using more than one way to communicate. You likely will have to be a rake at the gates of Hell, but one that uses words like "please" and "thank you".

Look for the name of a writer, or Web site manager, and if no contact information is listed, do a WhoIs search by typing "whois www.site-name.com". Be sure to include the quotes. That will tell you who registered the site, which is a good place to start on smaller Web sites. Look for phone numbers, e-mail, and fax numbers, and follow up your initial communication.
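If you would rather script the lookup than use a web form, WHOIS is a simple text protocol (RFC 3912) served on TCP port 43. The sketch below is a minimal, illustrative client: "example.com" stands in for the site you are researching, and note that registration records are keyed to the registered domain, so drop any "www." prefix. Many registrations also sit behind privacy proxies, so the registrant details may be masked.

```python
import socket

def whois_query(domain: str, server: str = "whois.iana.org", timeout: float = 10.0) -> str:
    """Send a plain-text WHOIS query (RFC 3912) over TCP port 43 and return the raw reply."""
    with socket.create_connection((server, 43), timeout=timeout) as sock:
        sock.sendall((domain + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# whois.iana.org answers with a referral to the registry's own WHOIS server;
# query that second server to see the registrant and contact details.
print(whois_query("example.com"))
```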

Once you have a name, even if you can't find a phone number or e-mail, you can probably take an educated stab at one. Use a site like E-mail Format to help you out. And in your e-mail, be sure to explain clearly, concisely, and logically why your request ought to be honored.

A willingness to compromise can get you better results, too. If, for example, your initial request to fully remove your name gets refused, see if asking to have your identity anonymized will work. And if one person at the site you've contacted keeps stalling you, see if there's another you can contact instead.

Step 4: Get Google to hustle on search engine changes.
If you've been successful in changing a site, but Google is still showing the older version, you can use Google's URL Removal Tool to accelerate the process. Note that this will require a Google account, and that if you get Google to change, you're going to have to submit requests to other major search engines like Bing separately.

Step 5: Paint over the bad with good.
In cases where you can't get the site to remove the content that's negatively affecting your reputation, you can create new, fresh, positive content to counteract it. The idea is that the Positive You will bury the Negative You. Rick Santorum is a great example of how this can work in reverse, and no, I'm not going to link to it for you.

You can also use social-networking sites to bury bad news. From About.Me to Flickr to Twitter, social networks tend to rank highly in search results. By creating and maintaining accounts that use your real name, you can elevate the social-networking results for your name and, ideally, drop the results you want to bury onto the second page of results. Since studies show that second-page results are viewed significantly less often than first-page results, this could be a successful burying strategy.

However, a key component of this is linking the networks, so be prepared to do far more social networking than you had been.

Step 6: Go (politely) nuclear. Get a lawyer.
If you suspect something is actually defamatory, seek out legal advice. Gather your evidence, be polite and firm, and seek out someone who can guide you through the thorny legal thicket. This will also depend on your country -- England has much broader defamation and libel laws than the United States does -- and your budget.

There is no foolproof method for changing how you're presented on the Internet, whether looking at purely personally-identifiable data or the much more subjective presentation of your personal reputation. However, if these are concerns of yours, you're not alone out there, and these six steps will give you concrete actions you can take to reclaim your identity and repair how others see you.

Thursday, April 5, 2012

Hitachi Ultrastar 7K4000 4TB Enterprise Hard Drive Announced

Hitachi has announced its next-generation enterprise hard drive, the Ultrastar 7K4000. The most obvious change is the 1TB capacity bump over the Ultrastar 7K3000, but this is no insignificant matter. The 33% capacity boost in the same 3.5" form factor gives enterprise users the opportunity to lower their cost per GB, as the new Ultrastar 7K4000 drives can offer more capacity in the same footprint and weight profile without increasing the costs associated with power, data-center cooling and the like. The Ultrastar 7K4000 is the first 4TB enterprise hard drive to ship, placing it alongside another first, the Hitachi Deskstar 5K4000, the first 4TB hard drive shipping for client use.

From a hardware perspective, the Ultrastar 7K4000 features a 7,200 RPM rotational speed, 64MB of drive cache and a SATA 6Gb/s interface, delivering up to 171 MB/s throughput from its five-platter, 800GB-per-platter design. The drive is Advanced Format, using a 4,096-byte physical sector size, but is backward compatible with the legacy 512-byte sector size through built-in 512-byte emulation (512e). With smaller steps in areal density, moving to 512e Advanced Format lets Hitachi continue to deliver progressive capacity bumps in the Ultrastar line.
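To make the 512e point concrete: each 4,096-byte physical sector holds eight 512-byte logical sectors, so a write that is not aligned to an eight-sector boundary forces the drive to read, modify and rewrite whole physical sectors, which is why alignment matters on these drives. The short sketch below is purely illustrative arithmetic, not Hitachi firmware.

```python
# Hypothetical arithmetic for a 512e Advanced Format drive; not Hitachi firmware.
LOGICAL = 512                      # bytes per host-visible (logical) sector
PHYSICAL = 4096                    # bytes per sector actually written to the platter
PER_PHYS = PHYSICAL // LOGICAL     # 8 logical sectors fit in each physical sector

def physical_sectors_touched(start_lba: int, count: int) -> int:
    """How many 4K physical sectors a run of 512-byte logical sectors spans."""
    first = start_lba // PER_PHYS
    last = (start_lba + count - 1) // PER_PHYS
    return last - first + 1

# An aligned 4 KiB write (8 logical sectors starting at LBA 0) maps cleanly:
print(physical_sectors_touched(0, 8))   # 1 physical sector, no read-modify-write
# The same 8 sectors starting at LBA 4 straddle two physical sectors, so the
# drive must read back and rewrite both of them:
print(physical_sectors_touched(4, 8))   # 2
```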

Of course, Hitachi continues to offer an industry-leading five-year standard warranty and a 2-million-hour MTBF.

Hitachi Ultrastar 7K4000 Specs

  • Capacities
    • 4TB - HUS724040ALE640
    • 3TB - HUS724030ALE640
    • 2TB - HUS724020ALE640
  • Interface - SATA 6Gb/s
  • Form Factor - 3.5-inch
  • Sector size - 512e
  • Max. areal density (Gbits/sq. in) - 446
  • Data buffer - 64MB
  • Rotational speed (RPM) - 7200
  • Sustained transfer rate (typical) - 171 MB/s
  • Seek time (read, typical) - 8.0ms
  • Error rate (non-recoverable, bits read) - 1 in 10^15
  • Load/unload cycles (at 40° C) - 600,000
  • Targeted MTBF - 2 million hours
  • Warranty - 5 years
  • Acoustics Idle (Bels, typical) - 2.9
  • Startup current (A, max) - 1.2 (+5V), 2.0 (+12V)
  • Read/write (W) - 11.4
  • Unload idle (W) - 5.7
  • Weight (typical) - 690g
  • Environmental (operating) ambient temperature - 5 to 60°C
  • Shock (half-sine wave 2 ms) - 70G
  • Vibration, random (G RMS 5 to 500 Hz) - 0.67 (XYZ)
  • Environmental (non-operating) ambient temperature - -40 to 70°C
  • Shock (half-sine wave 1ms) - 300G

Availability

The Hitachi Ultrastar 7K4000 family is now shipping in limited quantities in 2TB, 3TB and 4TB capacities. 
