As reported by Engadget: As a rule, self-driving car tests tend to be limited to the country where they started. But that's not how people drive -- what happens when your autonomous vehicle crosses the border? Continental and Magna intend to find out. They're planning to pilot two driverless vehicles all the way from southeastern Michigan to Sarnia, Ontario, making this the first cross-border test of its kind. The machines won't be in complete control for the entire route, but they'll use a combination of cameras, LiDAR and radar to take over when they can, including at two key border crossings (the Detroit-Windsor Tunnel and the Blue Water Bridge).
This isn't the first autonomous driving-related agreement involving Michigan and Ontario, but it's an important one: it'll explore rules and regulations in addition to the usual self-driving data collection.
As you might guess, tests like this will be vital to making autonomy a practical reality. Driverless vehicles need to know how to adapt to changing road rules, such as different signage and units of measurement. While this isn't the greatest challenge, it has to be overcome if you're ever going to embark on cross-border shopping trips without touching your steering wheel.
As reported by The Verge: The question of whether, and to what extent, cars are like phones has been gently bubbling along over the past few years as we’ve watched the nexus of innovation shifting from the technology we carry in our pocket to that which carries us along the roads. It’s obvious now that cars will experience transformative change like phones did before them, but how many parallels are there really between the two?
If you want to see a company doing its utmost to reduce the complexities of a car down to a familiar phone-like interface, you need look no further than Tesla and its new Model 3. This is the most affordable electric car in Tesla’s stable, and it has the most aggressively stripped-down interior of any car from any manufacturer. There’s a 15-inch touchscreen in the middle of the dash and a couple of buttons on the steering wheel, and that’s it. Given how Apple’s iPhone was the phone that made this “one touchscreen to rule them all” interface paradigm familiar in the first place, I thought it’d be fitting to look at the similarities between the iPhone and this new Model 3, as a proxy for answering how similar cars and phones have become.
BUTTONS BE DAMNED
Before the iPhone, phones had as strong an affinity for physical keyboards as laptops still do. The first Android prototypes basically looked like BlackBerrys, and the most advanced smartphones from Nokia (like the 9500 Communicator) were awkward attempts at marrying the familiar with the new. That stage of evolution is where we find ourselves with car interfaces today: embracing new technology and touch interaction, but only partially. Audi’s latest A8 luxury sedan is a good example of the trepidatious transition away from traditional button interfaces. Like Nokia before it, Audi is obviously struggling to abandon buttons entirely.
Tesla’s Model 3 is as clean a departure from buttons as the original iPhone was. One touchscreen, all your information and interactions on it. You’ll be adjusting everything, right down to the wing mirrors, via that display, though Tesla retains a couple of basic physical controls on the steering wheel just as Apple did with the iPhone’s home button. In essence, the Model 3 turns the car’s entire human interface into software. It’s alien to us as a car interior, just as it was once alien as a phone interface (how do you speed-dial anyone without buttons?), but Tesla is betting that we’ll adapt to it over time just as we did with phones.
MAKING TECHNOLOGY MORE AFFORDABLE
It may seem perverse to allege that the iPhone, which has always been presented and perceived as a luxe phone purchase, has been a democratizing device. But if you think of it as lowering the price of an Apple computer from the MacBook’s four figures down to three, then it has indeed widened access to the latest technology. Still an expensive purchase for many, but much less so than previously.
That’s the position of the Tesla Model 3 today: it’s not the cheapest or most practical car you can purchase, but it brings Tesla’s advanced technologies like Autopilot down to their lowest price. Another similarity: both the iPhone and the Model 3 started rolling out slowly and with very limited initial quantities. That might seem coincidental, but it may also be read as evidence of how aggressive each company has been in pushing its technology to the masses.
OVER-THE-AIR SOFTWARE UPDATES
This isn’t solely a Model 3 feature, but Tesla has pioneered over-the-air (OTA) updates to its cars much in the same way that Apple made OTAs a feature of iPhone ownership. Phone and car software both used to be static, unchanging things, but as the pace of innovation has increased, so has the need for rapid updates. What is novel about the Model 3 is that it streamlines the software even further by limiting itself to the one screen. Outside of fantastical concepts, this is the closest that a car’s interface has gotten to the single-screen software environment we know from PCs and their mobile counterparts. Imagine how much easier iterating on the Model 3’s user interface will be: software designers will only have to code for one screen instead of the usual multiplicity of screens and physical controls inside cars.
Simplification isn’t easy, and Tesla is setting itself a non-trivial challenge in trying to create software equivalents for all the various buttons and dials scattered across a typical car’s interior. But in standardizing around this one display and a consistent hardware platform, the company can refine and improve its offering as fast as any mobile operating system can. This is the truest application of smartphone software development to cars that we’ve yet seen.
CHARISMATIC SALESMAN CEO
Tesla CEO Elon Musk has a sometimes-goofy presentation style that’s a million miles from Steve Jobs’ polished sales pitch, but it’s undeniable that both have been massively influential ambassadors for their brands. Musk has made Tesla cool; he’s made it a talking point in general conversation. Even with its so far limited sales, Tesla has grown to be a byword for electric vehicles as a whole, much in the same way as the iPhone has been for the smartphone category. BMW, Nissan, and many others are also making EVs, but it’s only with Tesla that you can say “I’m getting a Tesla” and need to explain nothing more beyond that.
The Model 3, and Tesla as a company, finds itself in a very decisive, precarious moment. The company needs to have the faith of its customers as it works to fulfill orders (and overcome any unforeseen stumbles that may arise) and that’s where a charismatic leader can be very helpful. Before Tesla is able to deliver actual cars to people, all it can sell them is a vision, and the 325,000 initial preorders for the Model 3 have shown that Musk is as capable of doing that as Apple’s Jobs was.
POTENTIAL TO CHANGE THE WORLD
Silicon Valley businesses can often seem smug and self-aggrandizing, but they do have a record of producing things that have been culturally and socially transformative on a global scale. Take your pick from the iPhone, Google search, Facebook, or the original silicon chips that gave the area its nickname. It’s no overstatement, then, to say that the Model 3 “could be Tesla’s iPhone moment,” as Recode’s Johana Bhuiyan argues. It could be the new mass-market product that overhauls the entire category it’s entering and resets expectations.
At first, exactly as with the iPhone, the Model 3 is only resetting the interface paradigm by dispensing with buttons in favor of a streamlined touch UI. What we see today is the foundation for what Musk and his team at Tesla want to achieve: the future they envision is one where you wouldn’t worry about being distracted from driving because you wouldn’t have to drive. And when you do choose to put your hands on the wheel, voice controls and automated settings would keep the need for visual distractions to a minimum. All those are things that Tesla would look to develop over the longer history of the Model 3, much as Apple’s most transformative changes — the App Store and the iSight camera — came in the years after the initial iPhone launch.
It’s certainly too early to know if Tesla will succeed, but if it does, it will be because of the Model 3. Like the iPhone before it, this car breaks with most of the conventions of its category and opts for a distinctly technological approach to a product that has until now been mostly defined by its mechanical qualities.
From Mark Burnett @ BearingPoint: Artificial Intelligence (AI) is an increasingly essential component in many products and services. If it's not in your products and services, it may well be in your competitors'. There are many kinds of AI and even more ways of applying it to business and technical problems.
This paper on Artificial Intelligence gives a practical assessment of the state of development of AI and Machine Learning, along with examples of their use and practical suggestions for what you need to consider if you want to use AI to enhance your business, products or services.
Advances in computing power, elastic cloud infrastructure, and the ability to quickly and cost-effectively deploy thousands of compute instances running neural nets and other kinds of machine learning on big data offer huge potential for automation, prediction, and the generation of insights from patterns in the data that humans fail to see.
This is a paper for those wanting to find a way to make a difference now. As such, it encourages visionaries and solution designers to set aside, for the moment, the sci-fi utopian view of AI as a general human-level intelligence, and to start by embracing the engineering problems of matching the various kinds of AI to the business problems and jobs-to-be-done they are suited for.
This is a call to tool up, exploit the cloud, understand the different AI frameworks and platforms, and bring in the knowledge and expertise to build the right kinds of AI/ML/cognitive computing to solve business problems in practical, future-proof ways that create competitive advantage from the outset.
As reported by MIT Technology Review: The race to build mass-market autonomous cars is creating big demand for laser sensors that help vehicles map their surroundings. But cheaper versions of the hardware currently used in experimental self-driving vehicles may not deliver the quality of data required for driving at highway speeds.
Most driverless cars make use of lidar sensors, which bounce laser beams off nearby objects to create 3-D maps of their surroundings. Lidar can provide better-quality data than radar and is superior to optical cameras because it is unaffected by variations in ambient light. You’ve probably seen the best-known example of a lidar sensor, produced by market leader Velodyne. It looks like a spinning coffee can perched atop cars developed by the likes of Waymo and Uber.
But not all lidar sensors are created equal. Velodyne, for example, has a range of offerings. Its high-end model is an $80,000 behemoth called the HDL-64E—this is the one that looks a lot like a coffee can. It emits 64 laser beams stacked one atop the other, each separated by an angle of 0.4° (smaller angles between beams mean higher resolution), with a range of 120 meters. At the other end of the lineup, the firm sells the smaller Puck for $8,000. This sensor uses 16 beams of light, each separated by 2.0°, and has a range of 100 meters.
To see what those numbers mean, compare raw data from the two sensors (the original article includes a video of each). The HDL-64E’s 64 horizontal lines render the scene in detail, while the image produced by its cheaper sibling is far sparser. So although both sensors nominally have a similar range, the Puck’s lower resolution makes it harder to spot obstacles until they are much closer to the vehicle.
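To put rough numbers on that gap, here's a minimal back-of-the-envelope sketch in Python (our illustration, not from the article), converting the beam separations quoted above into the vertical distance between adjacent beams downrange:

```python
import math

def beam_gap(separation_deg: float, distance_m: float) -> float:
    """Approximate vertical gap between adjacent lidar beams at a distance."""
    return distance_m * math.tan(math.radians(separation_deg))

# Beam separations quoted above: HDL-64E at 0.4 degrees, Puck at 2.0 degrees.
for name, sep in [("HDL-64E", 0.4), ("Puck", 2.0)]:
    for dist in (30, 60, 100):
        print(f"{name}: ~{beam_gap(sep, dist):.2f} m between beams at {dist} m")
```

At 60 meters, the HDL-64E's beams sit roughly 0.4 meters apart, so a pedestrian-sized object intersects several of them; the Puck's beams are more than 2 meters apart at that distance, so the same object might register on only a single beam, or none at all.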
At 70 miles per hour, spotting an object at, say, 60 meters out provides about two seconds to react. But when traveling at that speed, it can take 100 meters to slow to a stop. A useful range closer to 200 meters is a better target if autonomous cars are to be truly safe.
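The arithmetic behind those figures is easy to verify. A quick sketch, assuming a hard braking deceleration of about 0.5 g (our assumption; the article doesn't specify one):

```python
MPH_TO_MS = 0.44704

speed = 70 * MPH_TO_MS                        # ~31.3 m/s
reaction_window = 60 / speed                  # ~1.9 s -- the "two seconds" above

# Braking distance v^2 / (2a), with an assumed ~0.5 g hard stop on dry pavement.
decel = 0.5 * 9.81                            # m/s^2
stopping_distance = speed ** 2 / (2 * decel)  # ~100 m, matching the figure above

print(f"~{reaction_window:.1f} s to react, ~{stopping_distance:.0f} m to stop")
```

Seen that way, a 200-meter range buys the car roughly six seconds of warning at highway speed, double the distance it needs to brake to a standstill.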
That’s where cost comes in. Even an $8,000 sensor would be a huge problem for any automaker looking to build a self-driving car that a normal person could afford. Because of this, many sensor makers are readying new kinds of solid-state lidar devices, which use an array of tiny antennas to steer a laser beam electronically instead of mechanically. These devices promise to be easier to manufacture at scale and cheaper than their mechanical brethren. That would make them a palatable option for car companies, many of which are looking to build autonomous cars for the mass market as soon as 2021.
But some of these new solid-state devices may currently lack the fidelity required for self-driving cars to operate safely and reliably at highway speeds.
The French auto parts maker Valeo, for example, says it has built the world’s first laser scanner for cars that’s ready for high-volume production, the SCALA. It features four lines of data with an angular resolution of 0.8°. Automotive News previously reported that Valeo will provide the lidar sensor used in the new Audi A8, though at the time of writing Audi declined to confirm this and Valeo didn’t respond to a request for details. The new A8 is the first production car to feature lidar and can drive itself—but only in heavy traffic at speeds under 37 miles per hour.
In June, Graeme Smith, chief executive of the Oxford University autonomous driving spinoff Oxbotica, told MIT Technology Review that he thinks a trade-off between data quality and affordability in the lidar sector might affect the rate at which high-speed autonomous vehicles take to the roads. “Low-speed applications may be more affordable more quickly than higher-speed ones,” he explained. “If you want a laser that’s operating over 250 meters, you need a finely calibrated laser. If you’re working in a lower-speed environment and can get by with 15 meters’ range, then you can afford [to use] a much lower-cost sensor.”
Austin Russell, the CEO of lidar startup Luminar, says his company actively chose not to use solid-state hardware in its sensors, because it believes that while mechanically steering a beam is more expensive, it currently provides more finely detailed images that are critical for safe driving. “It doesn't matter how much machine-learning magic you throw at a couple of points [on an object], you can’t know what it is,” he says. “If you only see a target out at 30 meters or so, at freeway speeds that’s a fraction of a second.”
The standard of solid-state devices available for use in vehicles is likely to improve over time, of course. LeddarTech, for instance, is a Canadian firm based in Quebec that specializes in solid-state devices and is producing reference designs that auto parts makers will then use as a model to produce hardware at scale. The firm’s Luc Langlois says that one of its designs, estimated to cost a car company around $75 to produce, will feature either eight or 16 lines and be available in December 2018. A higher-resolution version, with 64 lines and estimated to cost around $100, will follow about a year later.
For its part, Velodyne has promised to build a solid-state lidar device, which John Eggert, director of automotive sales and marketing, says will use 32 laser lines and boast a range of 200 meters—though he won’t elaborate on the resolution provided by the hardware. And Israeli startup Innoviz Technologies claims to be making a $100 unit with a range of 200 meters and an angular resolution of 0.1°. Both firms have promised to put those sensors into production sometime in 2018, though the scale of production and availability remain unknown. Quanergy, a Silicon Valley startup, is building its own $250 solid-state device due to go into production later this year, but at the time of this writing did not respond to multiple requests for detailed specifications.
Oxbotica’s Smith thinks that automakers might just have to wait it out for a cheap sensor that offers the resolution required for high-speed driving. “It will be like camera sensors,” he says. “When we first had camera phones, they were kind of basic cameras. And then we got to a certain point where nobody really cared anymore because there was a finite limit to the human eye.” Makers of autonomous cars might find that lidar sensor performance levels out, too—eventually.
As reported by MIT Technology Review: Autonomous cars often proudly claim to be fitted with a long list of sensors—cameras, ultrasound, radar, LIDAR, you name it. But if you’ve ever wondered why so many sensors are required, look no further than one recently identified example.
It’s what’s known in the autonomous-car industry as an “edge case”—a situation where a vehicle might behave unpredictably because its software processes an unusual scenario differently from the way a human would. In this example, image-recognition software applied to data from a regular camera was fooled into thinking that images of cyclists on the back of a van were genuine human cyclists.
This particular blind spot was identified by researchers at Cognata, a firm that builds software simulators—essentially, highly detailed and programmable computer games—in which automakers can test autonomous-driving algorithms. That allows them to throw these kinds of edge cases at vehicles until they can work out how to deal with them, without risking an accident.
Most autonomous cars overcome issues like the baffling image by using different types of sensing. “LIDAR cannot sense glass, radar senses mainly metal, and the camera can be fooled by images,” explains Danny Atsmon, the CEO of Cognata. “Each of the sensors used in autonomous driving comes to solve another part of the sensing challenge.” By gradually figuring out which data can be used to correctly deal with particular edge cases—either in simulation or in real life—the cars can learn to deal with more complex situations.
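As a toy illustration of that kind of cross-checking (our sketch, not Cognata's or any automaker's actual pipeline), a fusion layer might refuse to trust any single modality on its own:

```python
from collections import Counter

def fuse(camera: str, lidar: str, radar: str) -> str:
    """Trust a label only when at least two of the three sensors agree."""
    label, votes = Counter([camera, lidar, radar]).most_common(1)[0]
    return label if votes >= 2 else "uncertain: treat as obstacle"

# The cyclist-decal edge case: the camera is fooled by the picture,
# but lidar and radar both report a flat surface.
print(fuse(camera="cyclist", lidar="flat surface", radar="flat surface"))
```

Real systems weight each modality by the conditions it handles well rather than voting naively, but even this crude scheme defeats the van-decal case: the camera's "cyclist" report is outvoted by the other two sensors.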
Tesla was criticized for its decision to use only radar, camera, and ultrasound sensors to provide data for its Autopilot system after one of its vehicles failed to distinguish a truck trailer from a bright sky and ran into it, killing the driver of the Tesla. Critics argue that LIDAR is an essential element in the sensor mix—it works well in low light and glare, unlike a camera, and provides more detailed data than radar or ultrasound. But as Atsmon points out, even LIDAR isn’t without its flaws: it can’t tell the difference between a red and a green traffic signal, for example.
The safest bet, then, is for automakers to use an array of sensors, in order to build redundancy into their systems. Cyclists, at least, will thank them for it.
As reported by Engadget: Elon Musk's latest venture, The Boring Company, has certainly been a source of amusement. Now, the billionaire visionary has tweeted that he's received verbal government approval to build a New York-Philadelphia-Baltimore-DC Hyperloop, which would get you from New York to Washington, DC, in 29 minutes. It currently takes approximately two and a half hours to travel between the two cities on Amtrak's Acela Express.
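For a sense of scale, some rough arithmetic (ours, with an assumed route length of about 225 miles between the two cities; Musk's tweet gives no distance):

```python
# Assumed values for illustration; only the 29-minute claim comes from Musk.
DISTANCE_MILES = 225       # approximate New York-to-DC rail distance
HYPERLOOP_MIN = 29
ACELA_MIN = 150            # ~2.5 hours, per the Acela figure above

print(f"Hyperloop average: {DISTANCE_MILES / (HYPERLOOP_MIN / 60):.0f} mph")  # ~466 mph
print(f"Acela average:     {DISTANCE_MILES / (ACELA_MIN / 60):.0f} mph")      # ~90 mph
```

In other words, the trip Musk describes implies sustained average speeds around five times those of the fastest train currently on the route.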
The company is currently working on tunnels aimed at relieving congestion in LA. Musk tweeted that the New York-DC Hyperloop would be constructed in parallel (with city-center-to-city-center service, including up to a dozen entry/exit points per city), followed by a Los Angeles-San Francisco loop. He also envisions a Texas (Dallas-Houston-San Antonio-Austin) Hyperloop as a possibility. That's some pretty far-off planning.
But for that to happen, he'll probably have to have some sort of conversation with the cities themselves, and so far that doesn't seem to have taken place. While a White House spokesperson told us that it believes innovation often comes from the "ingenuity and drive of the private sector," and that it has had "positive conversations" with Musk, it appears that those conversations didn't include representatives from the cities that would be affected by the Hyperloop. The New York City mayor's deputy press secretary Ben Sarle told Engadget, "Nobody in City Hall, or any of our city agencies, has heard from Mr. Musk or any representatives of his company."
Similarly, the Washington DC mayor's deputy press secretary Susana Castillo said, "This is the first we heard of it too, but we can't wait to hear more." Philadelphia's deputy communications director Mike Dunn told us in an email that Musk hadn't contacted any Philadelphia officials about the Hyperloop and that there are many hurdles for this "unproven" technology before it can be implemented.
Whether Baltimore has been included in these talks is unclear, but according to a statement made by its mayor, Catherine Pugh, the city is excited about the prospect. "I am excited to hear about Elon Musk's underground Hyperloop connecting New York to Washington, DC through Baltimore. If his plan becomes a reality it has tremendous potential to create new opportunities for Baltimore and transform the way we link to neighboring cities," said Pugh.