Friday, April 25, 2014

The FCC Doesn’t Want To Destroy Net Neutrality, But It’s Going To Anyway

As reported by GigaOm: The Federal Communications Commission doesn't want companies like Netflix or Viacom to have to pay to get their content to end users of broadband networks, but it doesn't see a way (or maybe even a reason) to ban the practice.

In a call with reporters on Thursday, FCC officials laid out the agency’s thinking on new network neutrality rules and tried to address concerns that the internet as we know it is broken.

The agency’s hope is to have new rules in place by the end of this year, and it plans to release a public document called a Notice of Proposed Rule Making (NPRM) outlining its thinking and asking questions about the new rules. It plans to release this NPRM in three weeks at its May 15 open meeting. Once the documents are released, the public will have a chance to comment on them.

What was once unreasonable discrimination now becomes commercially unreasonable

Since some of the content of that document was released Wednesday, the media and public interest groups have been concerned about what the new network neutrality framework would allow — namely, how the agency planned to ensure that ISPs won’t discriminate against the packets flowing across their networks. The answer? The agency will replace the “unreasonable discrimination” clause from the original net neutrality rules that were defeated in court this year with standards associated with “commercial reasonableness.”

It’s a subtle shift, but an important one. When the U.S. Court of Appeals gutted the Open Internet Order that set forth the net neutrality rules in January, it did so on the basis that the agency didn’t use the right justification for its rules. It tried to turn ISPs into common carriers and regulate them that way, but the court declared that the FCC couldn’t put that burden on the ISPs without changing the law or going through a regulatory process that was bound to cause a fight.

Instead we get a compromise by which the FCC attempts to honor the original intent of the 2010 Open Internet Order with a new test for discrimination. That test is the “commercial reasonableness” standard. Here’s how the FCC wants to do it.

If the devil is in the details, here are the details

First, the net neutrality rules that were gutted by the courts made a distinction between wireline broadband and wireless broadband. For a history on why, check out this post or this one. The FCC plans to keep those distinctions intact for the new rules. With this understanding, let’s hit the three main topics the FCC plans to cover, saving the most complicated element for last.

Transparency: Both the original and the new Open Internet Order make a provision for transparency, namely that network operators must share how they are managing their network traffic with the consumer. This applied to both wireline and wireless networks, so if your ISP is treating certain traffic differently, it has to tell you. The FCC’s upcoming documents also ask if this transparency could go further.

When asked if the order could require greater transparency about company networks, such as how congested they are or whether ISPs are charging for prioritization or access because the market is uncompetitive, an FCC official said, “The answer is yes.” He added that the agency believes greater transparency will help consumers and the commission determine how the broadband networks are functioning. That’s a pretty exciting promise if the FCC can wrangle that type of data from ISPs; right now, ISPs view it as competitive and proprietary.

An AT&T network operations center. How much transparency is enough?

Blocking: The courts struck down the original order’s anti-blocking provision, which said ISPs on wireline networks couldn't block lawful traffic and wireless ISPs couldn't block competing over-the-top calling and texting services. The new FCC documents will make the case that blocking traffic interrupts the “virtuous cycle” of broadband access: people use broadband because it gives them access to a variety of services, and because broadband access is beneficial, anything that makes people less inclined to use broadband causes harm.

This new reasoning would allow the FCC to implement a no-blocking position without resorting to calling ISPs common carriers. Another interesting tidbit here is that the FCC plans to ask about establishing a baseline of broadband service and view anything that goes below this baseline as blocking. This might seem esoteric, but in 2007 when Comcast was interfering with the delivery of BitTorrent packets, it argued that it wasn't actually blocking them. Instead it was delaying delivery so the routers in effect dropped the packets and customers couldn't access their files.


Commercial reasonableness: Here is the heart of last night’s controversy and where the FCC is walking its finest line. The agency wants to ensure that the spirit of network neutrality lives on, but legally it has to use a standard that opens the door to prioritization. The FCC even seems okay with prioritization in certain cases, with an agency official offering up the example of packets coming from a connected heart monitor as a protected class that could be prioritized over other traffic.

However, it will seek to avoid the obvious abuses, such as Netflix having to pay an ISP to see its traffic prioritized over another content provider’s. It will do this using the standards the FCC set forth in a 2011 cell phone roaming order that has been tested in court. As part of that order, which dictated that mobile carriers have an obligation to offer roaming agreements to other such providers on “commercially reasonable” terms, the agency laid out a set of factors for judging whether a practice is commercially unreasonable:
  • Does this practice have an impact on future and present competition?
  • How does vertical integration affect any deals and what is the impact on unaffiliated companies?
  • What is the impact on consumers, their free exercise of speech and on civic engagement?
  • Are the parties acting in good faith? For example, is the ISP engaged in a good-faith negotiation?
  • Are there technical characteristics that would shed light on an ISP practice that is harmful?
  • Are there industry practices that can shed light on what is reasonable?
  • And finally, a catch-all: are there any other factors that should be considered as part of the totality of the facts?
FCC Commissioners (L to R): Commissioner Ajit Pai, Commissioner Mignon Clyburn, Chairman Tom Wheeler, Commissioner Jessica Rosenworcel and Commissioner Michael O’Rielly (Source: FCC)

Of course, one challenge with this approach is that it requires an ISP to behave badly before the FCC can act. The agency said it will be on the lookout for such violations and will accept both formal and informal complaints. Once a problem is registered with the FCC, the agency will ask how it should handle the complaint and whether a time limit should be imposed for a resolution.

Finally, the official acknowledged that the agency asks in its documents if there is ever a reason for a flat prohibition against certain behaviors even if an ISP isn’t a common carrier. The agency would have to make the case that paid prioritization is such a consumer or industry harm that it should be prohibited altogether. But based on the thinking and attention devoted to the commercial unreasonableness standard, as well as the heart rate monitor example, it feels like the FCC isn't keen to walk this path.

So these are the topics and questions on which the FCC will vote on May 15 and, if approved, put out for public comment. At that point the agency typically offers a 30- or 90-day comment period.


So get ready, internet: the FCC does want to know your stance on this issue.

Thursday, April 24, 2014

Apple Tech Uses Wi-Fi Access Points For Indoor Navigation, 3D Positioning

As reported by Apple Insider: While most mobile devices rely on GPS for mapping and navigation, the system only works outdoors and in range of satellite timing signals. However, new technology from Apple could extend accurate positioning indoors without the need for additional hardware beyond existing Wi-Fi infrastructure.

A patent granted to Apple by the U.S. Patent and Trademark Office on Tuesday describes a robust system that combines GPS, Wi-Fi access points and onboard location databases to provide mobile devices accurate positioning data in nearly any environment.

According to Apple's U.S. Patent No. 8,700,060 for "Determining a location of a mobile device using a location database," the method employs location estimation through the successful communication with one or multiple Wi-Fi access points.

By calculating a number of factors, including access point filtering, hardware communication range and so-called "presence areas," a mobile device can narrow down its position on a map with relative precision. This includes products without GPS receivers.

One of the first steps in Apple's patent calls for a location-aware device or devices (with GPS capabilities) to transmit their position to a first Wi-Fi access point, which in turn relays the information to a server-based location system. From this data, the system can then estimate the approximate location, or "presence areas," of other devices within the communication range of the access point.

To calculate these presence areas, the system may use any number of analyses including an averaging of geographic locations based on location-aware mobile devices, signal strength of a given access point and surrounding building architecture, among other variables. Presence areas may be selected in a multi-pass process by filtering out potentials based on "popularity, stability, longevity, and freshness."

Loaded with data, the system can plot out connected mobile devices in cells on a geographic grid. Each cell acts as a container for presence areas and corresponding access points. In the patent’s illustrations, location-aware devices are represented as black triangles that sit within or near presence areas denoted by circles.

One way a mobile device can calculate its location is by detecting multiple presence areas and averaging distance from those close by, while discarding data from "outliers" farthest away from a given position. Following processing, the device can then display its average location on a mapping app.
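
To make that concrete, here is a minimal sketch of the averaging-with-outlier-rejection idea in Python, assuming the device already holds a list of candidate presence areas as latitude/longitude circles. The data, distance math, and outlier threshold below are illustrative stand-ins, not Apple's actual method.

```python
import math

# Hypothetical presence areas the device can currently "hear":
# (latitude, longitude, radius in meters) circles around access points.
# Values are invented for illustration.
presence_areas = [
    (37.3318, -122.0312, 50.0),
    (37.3321, -122.0308, 40.0),
    (37.3319, -122.0315, 60.0),
    (37.4101, -122.0590, 45.0),  # stale database entry far away: an outlier
]

def centroid(points):
    """Arithmetic mean of (lat, lon) pairs; fine at neighborhood scale."""
    lat = sum(p[0] for p in points) / len(points)
    lon = sum(p[1] for p in points) / len(points)
    return lat, lon

def distance_m(a, b):
    """Equirectangular approximation, adequate over short distances."""
    lat1, lon1 = math.radians(a[0]), math.radians(a[1])
    lat2, lon2 = math.radians(b[0]), math.radians(b[1])
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return math.hypot(x, y) * 6_371_000  # mean Earth radius in meters

def estimate_location(areas, outlier_factor=2.0):
    """Average the presence-area centers, then re-average after
    discarding centers much farther from the first pass than typical."""
    centers = [(lat, lon) for lat, lon, _radius in areas]
    first_pass = centroid(centers)
    dists = [distance_m(c, first_pass) for c in centers]
    typical = sorted(dists)[len(dists) // 2]  # (upper) median distance
    kept = [c for c, d in zip(centers, dists) if d <= outlier_factor * typical]
    return centroid(kept)

print(estimate_location(presence_areas))  # ~ (37.3319, -122.0312)
```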

Alternatively, an access point can send position information about other access points nearby, including only those that are within a mobile device's area of interest. This method of filtering is also used to approximate margin of error, which is denoted by a radius or radii extending from a focal point within a presence area.

In addition, Apple's method accounts for three-dimensional space by taking into consideration altitude data from devices supporting such GPS metrics.



From left: Multi-pass analysis, multi-pass analysis with outlier, and 3D positioning grid.

Tuesday's patent is similar to technology created by "indoor GPS" firm WifiSLAM, which Apple purchased in March 2013 for about $20 million. WifiSLAM's system relies largely on Wi-Fi signals to accurately position mobile devices while indoors and does not require GPS to operate.

Apple's patent for a Wi-Fi-based positioning system was first filed in 2010 and credits Ronald K. Huang as its inventor.


Wednesday, April 23, 2014

Japan’s Plan for Centimeter-Resolution GPS

As reported by IEEE Spectrum: A stranger to Tokyo could easily get lost in its urban canyons. And GPS navigation, stymied by low resolution and a blocked view of the sky, might not be much help. But that won’t be the case after 2018. Engineers at Tokyo-based Mitsubishi Electric Corp. report that they’re on track to start up the first commercial, nationwide, centimeter-scale satellite positioning technology. Beyond spot-on navigation, the technology will usher in a variety of innovative new applications, its proponents say.

Named the Quasi-Zenith Satellite System (QZSS), it is designed to augment Japan’s use of the U.S.-operated Global Positioning System (GPS) satellite service. By precisely correcting GPS signal errors, QZSS can provide more accurate and reliable positioning, navigation, and timing services.

Today’s GPS receivers track the distance to four or more GPS satellites to calculate the receiver’s position. But because of the various errors inherent in the GPS system, location can be off by several meters. By using the data from QZSS to correct the measured distance to each satellite, the accuracy of the calculated position is narrowed down to the centimeter scale.
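
As a rough illustration of that correction step, the sketch below applies hypothetical per-satellite range corrections to raw pseudoranges before the usual position solve. The satellite IDs, values, and layout are invented for the example and don't reflect the real QZSS message format.

```python
# Raw pseudoranges (meters) measured by the receiver to four GPS
# satellites, and per-satellite corrections broadcast via QZSS.
# All identifiers and numbers here are invented for illustration.
measured_ranges = {
    "G05": 21_334_512.8,
    "G12": 23_081_944.1,
    "G20": 20_977_130.5,
    "G29": 22_410_873.9,
}

qzss_corrections = {  # clock, orbit, and ionospheric terms, combined
    "G05": -3.42,
    "G12": +1.87,
    "G20": -0.95,
    "G29": +2.60,
}

# Apply the corrections, then feed the result into the usual
# four-unknown solve (x, y, z, receiver clock bias) in place of
# the raw pseudoranges.
corrected_ranges = {
    sat: rho + qzss_corrections[sat] for sat, rho in measured_ranges.items()
}

for sat, rho in sorted(corrected_ranges.items()):
    print(f"{sat}: {rho:,.2f} m")
```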

“GPS positioning can be off by as much as 10 meters due to various kinds of errors,” says Yuki Sato, a research engineer in Mitsubishi Electric’s Advanced Technology R&D Center, the prime contractor for the space portion of the project. “And in Japan, with all its mountains and skyscrapers blocking out GPS signals, positioning is not possible in some city and country locations,” he adds.

The Japan Aerospace Exploration Agency (JAXA) got the project under way with the launch of QZS-1 in September 2010. Three additional satellites are slated to be in place by the end of 2017, with a further three launches expected sometime later to form a constellation of seven satellites—enough for sustainable operation and some redundancy. The government has budgeted about US $500 million for the three new satellites, which are to be supplied by Mitsubishi. It also apportioned an additional $1.2 billion for the ground component of the project, which is made up of 1,200 precisely surveyed reference stations. That part’s being developed and operated by Quasi-Zenith Satellite System Services, a private company established for this purpose.

The four satellites will follow an orbit that, from the perspective of a person in Japan, traces an asymmetrical figure eight in the sky. While the orbit extends as far south as Australia at its widest arc, it is designed to narrow its path over Japan so that at least one satellite is always in view high in the sky—hence the name quasi-zenith. This will enable users in even the shadowed urban canyons of Tokyo to receive the system’s error-correcting signals.

“Errors can be caused, for example, by the satellite’s atomic clock, orbital shift, and by Earth’s atmosphere, especially the ionosphere, which can bend the signal, reducing its speed,” says Sato.

To correct the errors, a master control center compares the satellite ranges measured at the reference stations with the distances computed from the stations’ surveyed positions and the satellite’s predicted location. The resulting correction components are compressed from an overall 2-megabit-per-second data rate down to 2 kilobits per second and transmitted to the satellite, which then broadcasts them to users’ receivers.

“This is all done in real time, so compression is really important,” says Ryoichiro Yasumitsu, a deputy chief manager in Mitsubishi’s Space Systems Division. “It would take too long to transmit the original data.” 

Compression also means a practical-size antenna can be employed in the user’s receiver. In QZS-1 trial tests, Yasumitsu notes, the average accuracy is about 1.3 centimeters horizontally and 2.9 centimeters vertically.

This centimeter-scale precision promises to usher in a number of creative, or at least greatly improved, applications beyond car and personal navigation. Besides pointing out obvious uses like mapping and land surveying, Sam Pullen, a senior research engineer in the department of aeronautics and astronautics at Stanford, says precision farming and autonomous tractor operations will be big applications. “Unmanned aerial vehicles and autonomous vehicles in general,” he adds, “will also find centimeter-level positioning valuable in maintaining and assuring separation from other vehicles and fixed obstacles.”

In addition, the Japanese government plans to use the service to broadcast short warning messages in times of disaster, when ground-based communication systems may be damaged. As instructed by the government, the control center will transmit a brief warning message to the QZSS satellite, which will then broadcast it to users on the same frequency.

Given the range of promised applications and relatively low cost of the Japanese system compared with the €5 billion ($6.9 billion) budgeted for the EU’s Galileo, for instance, other nations will be watching and waiting to see if QZSS achieves its goals.

Google's Street View Lets You Step Back In Time

As reported by The Verge: Three years ago, a magnitude 9.0 earthquake struck off the coast of Japan and moved the entire island by 8 feet, changing the way the Earth spun on its axis in the process. The devastation of the tsunami that followed resulted in the loss of thousands of lives and billions of dollars in damage to homes, businesses, and the country’s infrastructure.  

In the aftermath, Google set out to preserve imagery it had captured prior to the disaster, including original Street View recordings that became an unintended time capsule. The company made a one-off site called Memories for the Future that let viewers see certain areas before and after the devastation. It was an unusual site considering Google’s standard operating procedure: a feverish pace of updates that erased the old with the new and never looked back.

Google’s changing that now with a feature that lets you step back in time to earlier versions of its Street View data, going back to 2006. Since then, each time the company updated Street View data, it also quietly kept the older versions. And in numerous cases, skipping between them is the difference between desolation and a sprawling metropolis, or — like in Japan’s case — vice versa.

[Street View images: cherry blossoms in Kyoto, Interstate 90 in Utah, and the Singapore skyline]

The feature, which Google is rolling out to the web version of Maps today, generally stays out of the way unless you want to go back in time. If you’re viewing a location with earlier recorded images, there’s now an hourglass and a slider in the top left of the screen that shows you the month and year. Dial it back and it sweeps to that copy stored on Google’s servers, almost as if you were clicking on a location just up the road.

The result is a kind of time warp that can show you months' and years' worth of human ingenuity, and just as quickly show it erased following a disaster or new construction project. With Street View now recording more than 6 million miles across 55 countries, there are a lot of those.

"We have built this very complicated graph of imagery through time and space," says Luc Vincent, the director of engineering for Street View. He says the option to go back and forth through time was the most requested by Google Maps users, who have been hounding the company to add it for years. This was primarily for simple things, like seeing older images of their house, school, or neighborhood. "People would say, ‘My house, can you please preserve it? Because I like it this way,’" Vincent told The Verge. "We can show you everything now."

Google is creating so much data, in fact, that Vincent says the current iteration of Time Machine intentionally dials back what people see. The smallest interval of time you can jump to is a month, even if Google has captured Street View recordings more frequently. That’s not a normal occurrence for most places, Vincent says, but there are places like Google’s campus, and major cities, where Street View cars are recording more than once a month, sometimes even several times a week.

"Algorithms pick the best looking images to show you"
Vincent says the company’s using an algorithm designed to pick the best imagery from the data that’s collected each month. It goes through the images the company has captured and weeds out sets that tend to have a lot of motion blur, or that have particularly bad weather.
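
A toy version of that selection pass might look like the sketch below, where each month's candidate capture runs are scored on sharpness and weather and the best one is kept. The field names and scoring are stand-ins, since Google hasn't published its actual criteria.

```python
def run_score(frames):
    """Average per-frame quality: sharper is better, bad weather costs."""
    def frame_score(f):
        penalty = 0.5 if f["bad_weather"] else 0.0
        return f["sharpness"] - penalty
    return sum(frame_score(f) for f in frames) / len(frames)

def best_run_per_month(runs_by_month):
    """runs_by_month maps '2013-06' -> list of capture runs, where each
    run is the list of frames from one pass of a Street View car."""
    return {month: max(runs, key=run_score)
            for month, runs in runs_by_month.items()}

# Two hypothetical runs in one month: a sharp rainy pass and a slightly
# blurrier clear one. The clear run wins on combined score.
runs = {
    "2013-06": [
        [{"sharpness": 0.90, "bad_weather": True},
         {"sharpness": 0.80, "bad_weather": True}],
        [{"sharpness": 0.70, "bad_weather": False},
         {"sharpness": 0.75, "bad_weather": False}],
    ]
}
print(best_run_per_month(runs))  # picks the clear run
```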

But expanding the recordings to what Vincent refers to as "slices" has opened up new avenues for the company to show off Street View imagery it once kept to itself. That includes roads with shoulders heaped with snow, drenched dark forests, or simply alternate views of familiar places. "We can show you Times Square at night," Vincent says, a first for the service that overwhelmingly prefers clear blue skies. "When we chose the image, the freshest imagery is typically the best … now you don’t have to make a choice."

One wrinkle in all this is that the physical location of roads changes over time, whether through human intervention or Mother Nature. In the case of the movement from the 9.0 earthquake, for instance, roads and buildings that were in one place when Google first drove through were at new GPS coordinates when its cars returned. That’s been preserved in Time Machine, Vincent says.

[Street View images: an overpass in Mexico and the Soumaya Museum in Mexico City]


"It’s not a bug; after the earthquake, the ground shifted by 3 meters. Everything else is from the same geo-coordinates," Vincent says. "It was the same thing with Hurricane Katrina in New Orleans."

Vincent and company hope Time Machine will be more than just a way to gawk at before and after photos of disasters, and perhaps become a tool for planning travel. They imagine people using it when planning a vacation to somewhere they've never been in order to see what it looks like during that particular time of year.

[Street View image: seasonal change in Norway]

Google won’t initially offer Time Machine for indoor imagery of buildings, or for trails, something it’s captured using its special Trekker backpack. It also won’t be available from the get-go on mobile devices; Vincent made no promises on timing beyond saying that the company is working on it. With that said, Street View on the go is often meant as a way to get your bearings on what’s around you now, not years ago. But that behavior, just like the images of the world Google is capturing, might ultimately change.

"We’ve been driving 3D cars for more than seven years," Vincent says. "It was totally different from what it is now."

Tuesday, April 22, 2014

Why Google Is Sending Its Smartphones Into Space

As reported by Business Week: Google (GOOG) and NASA are developing smart robots designed to fly around the International Space Station and eventually take over some menial tasks from astronauts with the aid of custom-built smartphones.

Since 2006, three colorful, volleyball-sized robots have been slowly floating around a 10-foot by 10-foot by 10-foot space inside the ISS. Scientists used them for research projects such as a study on the movement of liquids inside containers in microgravity environments.

NASA now plans to attach smartphones to the flying robots to give them spatial awareness that would enable them to travel throughout the space station. The Android-based phones will track the 3D motion of the robotic spheres while mapping their surroundings. “Our goal is to advance the state of 3D sensing for mobile devices in an effort to give mobile devices human-scale sense of space and motion,” says Johnny Chung Lee, a technical program lead at Google.

Before the phones are attached to the robots, a human will carry each phone around the station so that the mobile device can create a full 3D model of the facility. The robots should be able to navigate autonomously 230 miles above Earth. Within a few years, the project’s leaders say, such robots could shoot video from inside the station, conduct regular sound surveys, and take inventory of the tools on board.

“Inventory management is a huge problem at the ISS,” says Chris Provencher, a project manager for Smart Spheres at NASA’s Ames Research Center in Moffett Field, Calif. “Think of something that’s the size of a house and has thousands of tools, and they are spread all over the house—and every few months, you get a new family that has to figure out where everything is.”

It took a lot of tinkering to get Google’s spatially aware phones working in space, Provencher says. The phones’ gravity-vector algorithms had to be removed from the software, and the devices had to be adjusted to accommodate the robots’ slow speeds, which top out at about a foot per second.

“You can imagine, in the future, if you had a free flyer capable of flying outside, you could have crew control it from the inside,” Provencher says. “If the crew has to go out there eventually to do work, this can at least reduce the amount of time they have to spend outside. They can review the damage.”

The new phones are scheduled for launch into space on June 10. Google says the technology may also have applications on Earth, such as in gaming and navigation assistance for the visually impaired. “This is one step on the journey to making these algorithms more robust, more sophisticated, and to make them available to a large number of people,” says Lee.

SpaceX Brings a Booster Safely Back to Earth


As reported by MIT Technology Review: Space Exploration Technologies, or SpaceX, took a step toward making spaceflight less expensive by reusing its rocket boosters during a mission on Friday to the International Space Station. The Falcon 9 rocket used for the mission, dubbed Commercial Resupply-3, or CRS-3, was the first to fly with landing legs, and was the first to successfully perform a controlled ocean splashdown.

The launch of the third official cargo delivery mission by SpaceX to the station had been delayed from last month and again from Monday due to technical problems.

The rocket, carrying a Dragon space capsule loaded with 3,500 pounds of supplies for the space station, lifted off just after 3:25 p.m. EST. The Dragon spacecraft reached the precise orbit needed to rendezvous with the space station on Sunday.

The mission was the first successful test of a new capability for the first stage of the Falcon 9: the ability to descend to a soft touchdown after delivering its payload to orbit. Conventional rocket boosters fall back to Earth after expending their fuel, reentering the atmosphere fast enough to disintegrate in the heat caused by friction with the air. This adds greatly to launch costs, which can top $200 million per launch, since a new rocket has to be built for each flight (see “SpaceX to Launch World’s First Reusable Rocket”).

SpaceX is already the lowest-cost provider of launch services to the U.S. government and the commercial satellite industry, with flights costing less than $100 million. The company hopes to drop costs even further with reusable rockets. SpaceX has been testing a Falcon 9 first stage in low-altitude hops at its McGregor, Texas, rocket development and testing center. The company posted a video of a test flight that took place last week with the same type of landing legs used on Friday’s orbital flight.

A camera on the second stage of the rocket captured live video of the nine SpaceX-built Merlin engines firing on the first stage of the rocket, with the plume of flame and smoke gradually expanding as the air around the vehicle thinned. At about 50 miles in altitude, and traveling at about 10 times the speed of sound some 35 miles off the Florida coast, the first-stage engines cut off as planned. As the first stage dropped away, the single Merlin engine in the second stage fired to propel the Dragon craft the rest of the way into orbit. Another camera view showed the Dragon moving away from the second stage into space with the Earth as a backdrop.

Meanwhile, a data link with the first stage confirmed that three of the nine engines on the first stage had fired as planned to slow the booster’s reentry into the atmosphere. The plan then called for a single engine to restart at lower altitude over the Atlantic Ocean to enable a gentle splashdown. The second stage of the rocket was not designed to be recovered.



At a press conference about an hour and a half after the launch, SpaceX CEO Elon Musk confirmed that the initial data from the first-stage booster looked good. The booster had slowed to just over the speed of sound and had descended to about five miles, or about the altitude of a commercial airliner, before the terrestrial tracking station lost contact.

The latest data showed that the vehicle was not rolling. During the first attempt of a Falcon 9 first stage to safely splash down following an orbital flight, in November, the rocket spun out of control. Along with the addition of landing legs, the booster used for Friday’s flight included more powerful thrusters for countering the booster’s rolling motion.

Even so, Musk was not initially confident that the booster had landed softly on the water because of high waves.

“I think it’s unlikely that the rocket was able to splash down successfully,” he said during the post-flight press conference. At the time of the conference, he and his engineers were awaiting data from an airplane tracking the booster near the planned splashdown location, some 400 to 500 miles from Cape Canaveral. Boats that were to retrieve the booster were not able to approach the splashdown site because of the waves, which reached 15 to 20 feet.

However, Musk reported via Twitter about two hours after the press conference that the booster had indeed landed safely. “Data upload from tracking plane shows landing in Atlantic was good!” he tweeted. “Flight computers continued transmitting for 8 seconds after reaching the water. Stopped when booster went horizontal.” At last report, the crews of several boats were attempting to retrieve the booster.

SpaceX engineers are working toward the day when a Falcon 9 first stage will touch down on land. The company will attempt a touchdown on land after it demonstrates further precision splashdowns. The next big milestone following the successful recovery of a booster will be to reuse one, which could happen as early as next year. The company’s goal is not only to recover and reuse boosters, but to do so economically.

“The reuse must be both rapid and complete,” said Musk in the press conference, “like an aircraft or a car or something like that. If you have to disassemble and reassemble a car and change a bunch of parts in between driving it, it would make it quite expensive. So it’s true that we don’t just have to recover it, we have to show that it can be reflown quickly and easily with the only thing changing being reloading propellant.”

The Dragon that launched on Friday’s flight reached the space station and was grappled by the station’s robotic arm on Sunday morning. It was then installed at the Earth-facing port on the station’s Harmony module.

This was the fifth flight of a Dragon spacecraft. SpaceX has a contract with NASA for nine more cargo deliveries to the International Space Station. The company plans 10 more flights this year, including commercial satellite launches, each one of which will offer an opportunity to recover a booster.

Monday, April 21, 2014

How Smartphones Are Increasingly Driving Our Cars

As reported by ReadWrite: Suddenly it's not so important to own a car that's "the ultimate driving machine," as opposed to "the ultimate app machine." I drive my Honda Pilot instead of my Volvo XC90 whenever I can because the Honda can connect to my smartphone over Bluetooth, plus it has a great navigation system. My Volvo has neither—all it does is drive.


Car manufacturers have picked up on this trend, recognizing that our apps are increasingly important in our car purchasing decisions.

Developers want to get in on the action, too, but there is a big problem. In the car app market, "Developers are faced by enormous fragmentation, small addressable markets and high friction in the distribution and monetization of their software," as a new VisionMobile report highlights.

In other words, the car app market is a nightmare. And yet, there's still hope.

Baby, You Can Drive My Car

The best approach to incorporating apps these days is through in-vehicle infotainment (IVI) systems. Within the IVI market, mobile laggards BlackBerry (QNX Car) and Microsoft (Windows Embedded Automotive) are the leaders. But not for long.

Given how important in-car technology has become—and the sluggish pace at which it updates—more automobile manufacturers are turning to smartphones to drive innovation. While people swap out their cars infrequently, we change our smartphones every two years or so, making the smartphone an ideal target for car app innovation. John Ellis, head of Ford's developer program, explains:
The only one that puts software on the head unit is Ford Motor Company. We don't allow you access to the head unit but through a dedicated set of APIs. In our philosophy, the phone drives the head unit, the head unit is a display. Innovation is much faster on the phone than it could be on the head unit. Certainly for us, we're very bullish on this model. People are starting to see that it just works.
As VisionMobile's report indicates, there are three different ways automakers integrate cars and smartphones:
  1. The steering wheel controls and built-in voice recognition can be used to control smartphone apps. 
  2. Conversely, smartphone voice recognition (e.g., Apple’s Siri or Google Now) can be used to control IVI apps.
  3. The built-in infotainment system becomes a second display for smartphone apps, using APIs, or in its most extreme case, by mirroring the smartphone app on the in-car display. 

Standardizing The Link Between Car And Smartphone

Of course, this assumes there are standards for seamlessly connecting our cars to our smartphones. There are several competing standards, with Ford, which recently open-sourced its AppLink system as SmartDeviceLink, leading the pack. Others include MirrorLink, from the Car Connectivity Consortium (CCC), an alliance of consumer electronics companies and carmakers (MirrorLink has its roots in Nokia).

As important as these car manufacturer-driven initiatives are, there's a fair amount of enthusiasm for two new platforms from Apple (CarPlay) and Google (Open Automotive Alliance, modeled after the Open Handset Alliance). Such efforts, however, may be artificially limited: Any household that mixes iOS and Android devices is going to want a car app platform that isn't fixated on a particular smartphone OS. For those households, an open platform like SmartDeviceLink, which can integrate with different smartphone OSes, may be the better choice.

The Distant Future Of App-Enabled Cars

For developers pining after the biggest addressable market, smartphones are the biggest and best target, by far. But it's not a target to salivate over today: While 84 million new vehicles were manufactured in 2012, only a small minority of them were “app-enabled” models. According to ABI Research, there were fewer than 8 million OEM-installed connected car telematics systems in 2012.

Pushing new technologies and applications through automakers is always going to be slow. It's far more likely that Apple and Google will find ways to go "over-the-top" and connect apps directly with cars, perhaps through the On-Board Diagnostics (OBD-II) port. The OBD-II port has been mandatory in cars for over 10 years, which leaves the door open for app developers to connect directly with cars without awaiting formal approval from Ford, Fiat or others. At the moment, there are almost 200 apps in the Google Play store that use OBD-II.
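
For a sense of what that direct connection looks like in practice, here is a minimal Python sketch that reads engine RPM through an ELM327-style OBD-II adapter using pyserial. The serial port name is an assumption about a particular setup, though the "010C" request and the (256*A + B)/4 decoding follow the standard mode-01 RPM PID.

```python
import serial  # pyserial, talking to an ELM327-style OBD-II adapter

PORT = "/dev/rfcomm0"  # assumption: a paired Bluetooth adapter; adjust as needed

def query(conn, cmd):
    """Send one command and read until the adapter's '>' prompt."""
    conn.write((cmd + "\r").encode("ascii"))
    raw = conn.read_until(b">").decode("ascii", errors="replace")
    return raw.replace(">", "").strip()

with serial.Serial(PORT, 38400, timeout=2) as conn:
    query(conn, "ATZ")          # reset the adapter
    query(conn, "ATE0")         # turn command echo off
    resp = query(conn, "010C")  # mode 01, PID 0C: engine RPM
    # Standard response form: "41 0C A B", with rpm = (256*A + B) / 4.
    parts = resp.split()
    if len(parts) >= 4 and parts[:2] == ["41", "0C"]:
        rpm = (int(parts[2], 16) * 256 + int(parts[3], 16)) / 4
        print(f"Engine RPM: {rpm:.0f}")
    else:
        print("Unexpected response:", resp)
```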

While OBD-II connections don't allow apps to actually control the car, they may give developers just enough access and a lot more development freedom, the key ingredients for fostering innovation.