
Friday, July 1, 2016

US Federal Gov’t Opens Investigation Into First Known Tesla Autopilot Fatality

As reported by RT.com: The National Highway Traffic Safety Administration (NHTSA) is investigating the first known fatality involving a Tesla Model S in which the Autopilot system was active, the company has confirmed.

On May 7, Ohio resident Joshua Brown, 45, was in the driver’s seat of a Tesla Model S in Williston, Florida. The car’s Autopilot was engaged when a tractor-trailer made a left turn in front of the electric vehicle. Brown was killed “when he drove under the trailer,” the Levy Journal reported at the time. “The top of Joshua Brown’s 2015 Tesla Model S vehicle was torn off by the force of the collision.”
After striking the underside of the trailer, the car then continued driving until it left the road, struck a fence, smashed through two other fences and struck a power pole.
On Thursday, Tesla announced that the NHTSA had opened a preliminary evaluation on Wednesday into the performance of Autopilot in the crash. Tesla said that, following its standard practice, it had informed the federal agency of the accident immediately after it occurred.
“Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer,” Tesla said in a blog post.
“Had the Model S impacted the front or rear of the trailer, even at high speed, its advanced crash safety system would likely have prevented serious injury as it has in numerous other similar incidents,” the company noted.
The truck driver, Frank Baressi, a 62-year-old from Tampa, was not injured in the crash. Charges against him are pending, however, according to the Levy Journal.
 The accident "calls for an examination of the design and performance of any driving aids in use at the time of the crash," the agency said in a statement.
"Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert,” Tesla said. “Nonetheless, when used in conjunction with driver oversight, the data is unequivocal that Autopilot reduces driver workload and results in a statistically significant improvement in safety when compared to purely manual driving."
Tesla said Autopilot has been used for more than 130 million miles of driving without a fatality. By comparison, a fatality occurs once every 94 million miles driven in the US and once every 60 million miles globally, the company added.
The NHTSA investigation will look into whether the Autopilot system was working properly at the time of the crash.
The accident "calls for an examination of the design and performance of any driving aids in use at the time of the crash," the agency said in a statement.
Autopilot is currently in its “public beta phase,” and customers must acknowledge that before they can use the system. They are also expected to keep their hands on the wheel while it is engaged, as well as to “maintain control and responsibility” for their vehicles.
"Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert,” Tesla said.“Nonetheless, when used in conjunction with driver oversight, the data is unequivocal that Autopilot reduces driver workload and results in a statistically significant improvement in safety when compared to purely manual driving."

Thursday, June 30, 2016

AI Bests Air Force Combat Tactics Experts in Simulated Dogfights

As reported by Ars Technica: In the future, the US Air Force hopes to have armed drones flying in formation with human pilots, responding to their verbal and digital commands to fight the enemy and strike targets. That would require an artificial intelligence capable of interpreting commands and applying knowledge of combat tactics—something that is already being proven in a project funded by the Air Force Research Lab.

ALPHA, an artificial intelligence trained by a retired Air Force expert in air combat, was originally developed as what amounts to ultimate video game AI—an autonomous simulated enemy for use in training fighter pilots. The AI is so good that it has consistently beaten human pilots in simulated air combat—even when heavily handicapped by simulated physics. And now AFRL is investigating using ALPHA as the AI for Unmanned Combat Aerial Vehicles (UCAVs) in the physical world, potentially flying missions alongside human pilots.
Described in a paper recently published in the Journal of Defense Management, ALPHA was created using a "genetic fuzzy tree" (GFT) system. There's a lot to unpack in that term, but in short, the methodology uses genetic algorithms—code intended to mimic evolution and natural selection—to train a collection of independent but interconnected "fuzzy inference systems" (FISs). Instead of training each bit of fuzzy logic independently for a given task, as is normally done in fuzzy systems, the genetic algorithm "is utilized to train each system in the Fuzzy Tree simultaneously," lead researcher Nick Ernest, CEO of Psibernetix Inc. (the company that developed ALPHA), and his co-authors wrote in the paper. "Each FIS has membership functions that classify the inputs and outputs into linguistic classifications, such as 'far away' and 'very threatening', as well as if-then rules for every combination of inputs, such as 'If missile launch computer confidence is moderate and mission kill shot accuracy is very high, fire missile'. By breaking up the problem into many sub-decisions, the solution space is significantly reduced."
This, Ernest said, closely mirrors how humans make decisions on the fly. "Only considering the relevant variables for each sub-decision is key for us to complete complex tasks as humans," he said. "So, it makes sense to have the AI do the same thing."
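To make the fuzzy-tree idea concrete, here is a minimal, self-contained Python sketch of a single fuzzy inference system of the kind the paper describes: membership functions map a crisp input onto linguistic labels, and an if-then rule fires on a combination of labels. The function names, thresholds, and the 0.5 activation cutoff are invented for illustration; they are not from the paper.

```python
def triangular(x, left, peak, right):
    """Triangular membership function: the degree (0..1) to which x
    belongs to a linguistic class centered on `peak`."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def classify(value):
    """Map a 0..1 input onto overlapping linguistic labels."""
    return {
        "low":      triangular(value, -0.5, 0.0, 0.5),
        "moderate": triangular(value,  0.0, 0.5, 1.0),
        "high":     triangular(value,  0.5, 1.0, 1.5),
    }

def fire_missile(launch_confidence, shot_accuracy):
    """One if-then rule in the spirit of the paper's example:
    'If launch-computer confidence is moderate and kill-shot accuracy
    is very high, fire missile.' Rule strength is the minimum of the
    antecedent memberships (a standard fuzzy AND)."""
    strength = min(classify(launch_confidence)["moderate"],
                   classify(shot_accuracy)["high"])
    return strength > 0.5  # fire only when the rule activates strongly

print(fire_missile(0.55, 0.95))  # -> True
```

Breaking the full air-combat problem into many such small rule sets, one per sub-decision, is what the authors mean by reducing the solution space.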
The GFT approach and ALPHA were developed by Ernest during his doctoral research in aerospace engineering at the University of Cincinnati, during a three-year fellowship funded by the Dayton Area Graduate Studies Institute and the Air Force Research Lab. The tools for creating ALPHA incorporated input from retired Air Force Colonel Gene Lee, a former Air Force air combat instructor, along with research and technology from AFRL and from University of Cincinnati aerospace professor Kelly Cohen and fellow doctoral student Tim Arnett. Before ALPHA earned its wings in simulated combat with humans in AFRL's Advanced Framework for Simulation, Integration, and Modeling (AFSIM) environment, the development team generated scores of random versions of ALPHA that were pitted against a version tuned with human input, running on a $500 desktop PC. The winning versions of the AI were then "bred" with each other, with the best-performing traits carried on to the next generation of ALPHA code. These were then let loose on each other to simulate natural selection. In the end, through subsequent generations of pitting AI versions against each other, only one remained—the alpha ALPHA, so to speak.
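The "breeding" step lends itself to an equally compact sketch. The loop below is a generic genetic algorithm in Python, not Psibernetix's actual training code, but it shows the cycle the article describes: score a population of candidate controllers, keep the winners, cross them over with mutation, and repeat. The toy fitness function stands in for scoring simulated engagements.

```python
import random

def fitness(genome):
    # Placeholder objective: in the real system this would be the score
    # from simulated air-combat engagements, not a toy target sum.
    return -abs(sum(genome) - 10.0)

def breed(parent_a, parent_b, mutation_rate=0.1):
    # Uniform crossover: each gene comes from one parent at random,
    # with occasional Gaussian mutation.
    child = [random.choice(genes) for genes in zip(parent_a, parent_b)]
    return [g + random.gauss(0.0, 1.0) if random.random() < mutation_rate else g
            for g in child]

# Start from scores of random candidates, as the ALPHA team did.
population = [[random.uniform(0.0, 5.0) for _ in range(8)] for _ in range(50)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # selection
    offspring = [breed(random.choice(survivors), random.choice(survivors))
                 for _ in range(len(population) - len(survivors))]
    population = survivors + offspring               # next generation

best = max(population, key=fitness)
print(f"best fitness after 100 generations: {fitness(best):.4f}")
```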
In October, Lee took on ALPHA himself as the first human opponent. The former fighter combat instructor scored no kills against ALPHA's simulated aircraft—in fact, every simulated engagement ended in him being shot down. "I was surprised at how aware and reactive it was," Lee said. "It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed." While most AIs he had encountered in simulations before could be "beat up on" by experienced pilots, he said, "until now, an AI opponent simply could not keep up with anything like the real pressure and pace of combat-like scenarios." After flying multi-hour simulated missions with ALPHA as the opponent, he said, "I go home feeling washed out. I'm tired, drained and mentally exhausted. This may be artificial intelligence, but it represents a real challenge."
It's not like ALPHA has been given any special advantages in these simulations. "The [simulated] aircraft have the exact same mechanical capabilities with respect to their mover models," Ernest told Ars in an e-mail. But ALPHA even won when it was given an inferior aircraft to fly—deliberately handicapped with lower speed, shorter missile range, and inferior sensors. "ALPHA has even occasionally been given a lesser G-Limit than the opponent," Ernest said, and it still won. ALPHA has also controlled multiple simulated aircraft at the same time in coordinated combat.
Going forward, AFRL and Psibernetix plan to continue training ALPHA against other pilots and to close the gap with the real world by making the simulation's aerodynamic and sensor models more realistic. "The goal is to continue developing ALPHA, to push and extend its capabilities," Ernest said. Eventually, AI like ALPHA could be trained to work in teams with human pilots—and keep humans from making mistakes in combat. According to the researchers, ALPHA can act on sensor data to make or change combat decisions for up to four aircraft in less than a millisecond, moving aircraft to evade missiles and fire weapons while a human pilot essentially manages the overall air battle at a higher level.

Tuesday, June 28, 2016

Everybody is Missing the Point of the Tesla-SolarCity Deal

As reported by Yahoo: Last week, Tesla announced that it was pursuing an acquisition of SolarCity, the troubled solar-panel leasing company that is run by Tesla CEO Elon Musk's cousin Lyndon Rive.
And on Monday, SolarCity said it had formed a committee of independent directors to evaluate the deal.
That is because Musk is the largest single shareholder of both companies and the chairman of SolarCity.
Holy conflict of interest!
Actually, Musk and Rive have said they will recuse themselves from voting on the deal.
Anyway, SolarCity is a $3 billion bite for Tesla in an all-stock transaction that would add — brace yourself — over $3 billion in debt to Tesla's balance sheet.

If this looks like a SolarCity bailout — the company has seen its market cap, now $2.25 billion, sawed in half since last year — then that's because it is.
The deal might look outwardly vexing, and much of the analysis has suggested that Tesla is doing something wrong here, but it's not. It's actually following through on promises that Musk has made over and over for the past half decade.
If you've been paying attention, then you could have seen this one coming, though you probably thought Tesla and SolarCity would become closer partners and not that Tesla would take charge.
So why is Tesla doing this?
It certainly doesn't seem to be to enhance shareholder value. Tesla stock dived when the news broke.

What shareholder value?

But it has never been clear that Tesla cares much about shareholder value.
Rather than please investors or vindicate the ratings and target prices of Wall Street analysts, the electric-car maker is playing a longer game. The stock just helps it get there by providing a way to raise capital, as it did recently and also last year, and to be used as a form of super currency to sustain Musk's vision of a world freed from fossil-fuel dependency.

SolarCity is integral to that vision, even if it's Musk's most under-the-radar interest — it's hard to compete with the car of the future and a SpaceX mission to Mars.
And that's what everybody is missing here.
With this bid, Tesla is trying to become what Musk probably wanted it to be all along: an integrated holding company providing global-warming solutions.


If the SolarCity deal goes through, then Tesla will be a carmaker; a battery maker, thanks to the Gigafactory being built in Nevada; an energy storage company, thanks to Tesla Energy, unveiled last year and selling residential battery packs; and a solar finance firm.
Put all that together under one roof and you get a company that can sell or lease you a zero-emission, off-the-grid lifestyle.

Plus, Musk rescues his SolarCity investment in the process. But there's nothing surprising here in the master plan. Musk has always thought of the companies he's involved with as a single mega investment. It makes sense to use the stock of one to keep another one going.
Yes, all that debt could eventually be a major problem for Tesla. It is already burning cash like crazy as it tries to go from building 50,000 cars a year in 2015 to building 500,000 annually by 2018. And SolarCity is incinerating cash.
So that debt load that Tesla would be taking on isn't going anywhere. Shareholders could rightly accuse Musk of hanging a $3 billion anchor around Tesla's neck.
Of course, shareholders could also vote against the deal. And if they don't think loading Tesla up with another company's debt is a good idea, they can simply sell their shares.
If Tesla can really save the world, then from Musk's perspective, taking on all that debt has been worth it.

Why the Rise of Driverless Cars Has Got Detroit Spooked

As reported by MIT Technology Review: Are modern car companies lumbering dinosaurs or fleet-footed innovators looking toward the next big, disruptive idea? At the moment, they seem to be both—while they boast huge revenues and have posted record profits in recent years, firms like GM and Ford also appear to feel that, on some level, the sun is setting on their business model. And they are scrambling to reinvent themselves as firms that provide all sorts of transportation options, from ride-hailing services to cars that drive themselves.

As the cover story in the most recent issue of Fortune puts it:
For 125 years U.S. auto companies made their money on the manufacture of motor vehicles. Now they must be in the business of ride-hailing apps, shuttle buses, 3D maps, and computers on wheels that drive themselves. They’re no longer automotive companies either—they’re now calling themselves “mobility” companies.
This change has come about with dizzying speed—a decade ago, robotic cars only existed in research projects funded by DARPA. Most of them barely worked. Today they represent such a threat to the car industry’s status quo that Ford’s president and CEO, Mark Fields, has said the company must “disrupt itself” if it is to survive. Earlier this year GM bought driverless-car startup Cruise Automation for $1 billion. An avalanche of deals ensued:
In May, Toyota struck a partnership with Uber, Volkswagen invested $300 million in ride-hailing company Gett, Apple poured $1 billion into China’s Didi Chuxing, and Google partnered with Fiat Chrysler to outfit 100 Pacifica minivans with self-driving technology.

But drawing a line between nervous car company executives and a wholesale change in how the average driver approaches owning and driving a car could be a bit simplistic. 
Types of automation like collision avoidance and adaptive cruise control are indeed trickling into midrange cars. Luxury models come with self-parking features, and if you’re brave enough to engage Tesla’s Autopilot, you can experience the (sometimes scary) cutting edge of driverless technology that’s already available to consumers.
But the gap between far-sighted entrepreneurs and everyday drivers is large. One startup mentioned in the Fortune piece, called Zoox, is apparently building “bidirectional robo-taxis” that the company’s founder says aren’t cars at all, but “what comes after the car.” Zoox is apparently raising north of a quarter-billion dollars to make its … conveyance a reality. This kind of “post-car” outlook is popular in Silicon Valley, but people may not be ready for such visionary modes of transportation:
In May, Google posted job listings for test drivers in Arizona, which tech bloggers painted as a dream job. Who wouldn’t want to make $20 an hour sitting in a car doing nothing for eight hours a day? But the social media reaction from nontechies was a glimpse into the public’s fears of robot cars. “You’re gonna have to pay more to get me in that tin can with a mind of its own,” wrote one Facebook commenter.
The arrival of autonomous cars can't come soon enough, given their very real promise for reducing fatalities, injuries, and property damage. But they have a long way to go yet before they have truly arrived.

Friday, June 24, 2016

How do You Teach an Autonomous Vehicle When to Hurt its Passengers?

As reported by The Verge:  How do you teach a car when to self-destruct? As engineers develop and refine the algorithms of autonomous vehicles that are coming in the not-so-distant future, an ethical debate is brewing on what happens in extreme situations — situations where a crash and injury or death are unavoidable.
A new study in Science, "The Social Dilemma of Autonomous Vehicles," attempts to understand how people want their self-driving cars to behave when faced with moral decisions that could result in death. The results indicate that participants favor minimizing the number of public deaths, even if it puts the vehicles’ passengers in harm’s way — what is often described as the "utilitarian approach." In effect, the car could be programmed to self-destruct in order to avoid causing injury to pedestrians or other drivers. But when asked about cars they would actually buy, participants would choose a car that protects them and their own passengers first. The study shows that morality and autonomy can be incongruous: in theory, we like the idea of safer streets, but we also want to buy the cars that keep us personally the safest.
These are new technological quagmires for an old moral quandary: the so-called trolley problem. It’s a thought experiment that’s analyzed and dissected in ethics classes. "In the trolley problem, people face the dilemma of instigating an action that will cause somebody’s death, but by doing so will save a greater number of lives," Azim Shariff, one of the study’s three authors and an assistant professor of psychology at the University of California Irvine, says. "And it’s a rather contrived and abstract scenario, but we realize that those are the sorts of decisions that autonomous vehicles are going to have to be programmed to make, which turns these philosophical thought experiments into something that’s actually real and concrete and we’re going to face pretty soon."

In the study, participants are presented with various scenarios, such as choosing to go straight and kill a specified number of pedestrians or veering into the next lane to kill a separate group of animals or humans. Participants choose the preferred scenario. In one example: "In this case, the car will continue ahead and crash into a concrete barrier. This will result in the deaths of a criminal, a homeless person, and a baby." The other choice: "In this case the car will swerve and drive through a pedestrian crossing in the other lane. This will result in the deaths of a large man, a large woman, and an elderly man. Note that the affected persons are flouting the law by crossing on the red signal."
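A toy version of the policy question makes the tension concrete. The sketch below, with invented scenarios and no claim to match the study's actual materials, contrasts the "utilitarian" rule (minimize total deaths, whoever they are) with the self-protective rule buyers say they prefer.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    passenger_deaths: int
    pedestrian_deaths: int

def utilitarian_choice(options):
    """Minimize total deaths, regardless of who dies."""
    return min(options, key=lambda m: m.passenger_deaths + m.pedestrian_deaths)

def self_protective_choice(options):
    """Protect the occupants first; break ties on total deaths."""
    return min(options, key=lambda m: (m.passenger_deaths, m.pedestrian_deaths))

options = [
    Maneuver("continue ahead into barrier", passenger_deaths=1, pedestrian_deaths=0),
    Maneuver("swerve into crosswalk",       passenger_deaths=0, pedestrian_deaths=3),
]

print(utilitarian_choice(options).name)      # continue ahead into barrier
print(self_protective_choice(options).name)  # swerve into crosswalk
```

The two rules pick opposite maneuvers in the same scenario, which is precisely the gap the study found between what people endorse in the abstract and what they would actually buy.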
The questions — harsh and uncomfortable as their outcomes may be — reflect some of the public discomfort with autonomous vehicles. People like to think about the social good in abstract scenarios, but the data shows that when it comes time to actually buy a car, they are going to protect their occupants.
The MIT Media Lab has created a related website, the Moral Machine, that allows users to take the test. It’s intended to help aid the continued study of this developing subject area — an area that will quickly become critical as regulators seek to set rules around the ways cars must drive themselves. (When I took the test, I found that I saved more women and children — not much different from those who got first dibs on the Titanic lifeboats.)
The authors believe that self-driving cars are unlike other forms of automated transportation, such as airport trams or even escalators, because those systems are not competing with other vehicles on the road. Another author, Jean-François Bonnefon, a psychological scientist working at France’s National Center for Scientific Research, told me there is no historical precedent that applies to the study of self-driving ethics. "It is the very first time that we may massively and daily engage with an object that is programmed to kill us in specific circumstances. Trains do not self-destruct, no more than planes or elevators do. We may be afraid of plane crashes, but we know at least that they are due to mistakes or ill intent. In other words, we are used to self-destruction being a bug, not a feature."

But even if programming can reduce fatalities by making tough choices, it’s possible that putting too much weight on moral considerations could deter development of a product that might still be years or decades away. Anuj K. Pradhan, an assistant research scientist in the Human Factors Group at the University of Michigan Transportation Research Institute (UMTRI), studies human behavior systems. He thinks these sorts of studies are important and timely, but would like to see ethical research balanced with real-world applications. "I do not think concerns about very rare ethical issues of this sort [...] should paralyze the really groundbreaking leaps that we are making in this particular domain in terms of technology, policy and conversations in liability, insurance and legal sectors, and consumer acceptance," he says.
It’s hard not to compare a programmed car with what a human driver would do when faced with a comparable situation. We constantly face moral moments in our everyday lives. But a driver in a precarious situation often does not have time to consider moral outcomes, while a machine is more analytical. For this reason, Bonnefon cautions against drawing direct comparisons. "Because human drivers who face these situations may not even be aware that they are [facing a moral situation], and cannot make a reasoned decision in a split-second. Worse, they cannot even decide in advance what they would do, because human drivers, unlike driverless cars, cannot be programmed."
"HUMAN DRIVERS, UNLIKE DRIVERLESS CARS, CANNOT BE PROGRAMMED"
It’s possible that one day, self-driving cars will be essentially perfect, in the same way an automatic transmission is now more precise than even the best manual. But for now, it’s unclear how the public debate will play out as attitudes shift. These days, when Google’s self-driving car crashes, it makes headlines.
What everyone seems to agree on is that the road ahead will be muddled with provocative moral questions, before machines and the market take over the wheel from us. "We need to engage in a collective conversation about the moral values we want to program in our cars, and we need to start this process before the technology hits the market," Bonnefon says.

Thursday, June 23, 2016

This Cognitive Autonomous Vehicle Is Powered By You And IBM Watson

As reported by Forbes: Self-driving, cognitive, and powered by IBM Watson, a new autonomous vehicle called Olli is expected to hit public roads later this year in Washington, DC and Miami-Dade County.
Local Motors, the company that created the first 3D-printed car, developed Olli (more like a very short bus) to carry up to 12 people and fill transportation gaps in a city’s transit system, or to transport employees across corporate campuses more efficiently. Olli is fueled by your collective brains and allows for natural interaction with the vehicle using IBM Watson’s IoT cognitive computing capabilities.
Olli has more than 30 sensors embedded in the vehicle that collect transportation data as it moves. Using cognitive computing, Olli can analyze and learn from that data. New sensors can be continuously added and adjusted as passenger needs and local preferences are identified, and Olli’s knowledge grows with each passenger interaction.

Here’s how Olli works via Watson. On entering the vehicle, a passenger can ask a question or request a specific vehicle function, for example, “Olli, can you take me to the Lincoln Memorial?” or “How does this feature work?” Passengers will also be able to ask for recommendations on local destinations or historical sites based on analysis of personal preferences. Olli learns as it moves: as each passenger asks for destinations, it stores that information and remembers it for the next person.
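Forbes doesn't detail the underlying interface, so as a purely hypothetical illustration (this is not IBM Watson's actual API), the ask-interpret-act loop might look something like the sketch below, with crude keyword matching standing in for Watson's natural-language understanding:

```python
def interpret(utterance):
    """Map a passenger utterance to an (intent, argument) pair.
    Keyword matching here is a stand-in for real NLU."""
    text = utterance.lower()
    if "take me to" in text:
        return "navigate", text.split("take me to", 1)[1].strip(" ?!.")
    if "how does" in text:
        return "explain_feature", text
    if "recommend" in text:
        return "recommend", text
    return "unknown", text

def handle(utterance):
    """Act on the interpreted intent."""
    intent, arg = interpret(utterance)
    if intent == "navigate":
        return f"Routing to {arg}."
    if intent == "explain_feature":
        return "Here is how that feature works..."
    if intent == "recommend":
        return "Some nearby destinations you might like..."
    return "Sorry, I didn't catch that."

print(handle("Olli, can you take me to the Lincoln Memorial?"))
# -> Routing to the lincoln memorial.
```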
Local Motors hopes that Olli can help reduce individual driving while increasing the efficiency of rides-on-demand, which could shrink the carbon footprint of cities and of corporate or academic campuses.

Wednesday, June 22, 2016

Acura Built an Electric NSX to Tackle Pikes Peak

As reported by Engadget: Acura's eagerly anticipated next-gen NSX is finally going into production for 2017, but the car will hit the road before then -- sort of. The company will race a highly modified version at the Pikes Peak hill climb event on June 26th. However, unlike the (mostly) gas-powered consumer model, the "EV Concept" race vehicle will be powered by four electric motors, one at each wheel. That means it looks roughly the same as a production NSX (other than the scoop and wing), but the custom EV drivetrain is completely different and built for racing.

The company hasn't said how much power the motors make, but they will give the car something called "four-wheel torque vectoring." That means engineers can dial in a precise amount of power at each wheel, making the car perform better in corners and under acceleration. The car also uses regenerative braking to extend battery life.
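As a rough sketch of what torque vectoring means in practice (the gains and the simple steering-proportional control law below are illustrative, not Acura's actual controller), a vectoring controller biases drive torque toward the outside wheels in a corner to help rotate the car:

```python
def distribute_torque(total_torque_nm, steering_angle_deg, gain=0.02):
    """Split total drive torque across the FL, FR, RL, RR wheels.

    Positive steering angle means turning left, so the right-side
    (outside) wheels receive extra torque, creating a yaw moment
    that helps the car rotate into the corner."""
    bias = max(-0.5, min(0.5, gain * steering_angle_deg))  # clamp the shift
    left_share = 0.25 - bias / 2.0
    right_share = 0.25 + bias / 2.0
    return {
        "FL": total_torque_nm * left_share,
        "FR": total_torque_nm * right_share,
        "RL": total_torque_nm * left_share,
        "RR": total_torque_nm * right_share,
    }

# Gentle left-hand corner: the outside (right) wheels get more torque.
print(distribute_torque(800.0, steering_angle_deg=10.0))
# -> {'FL': 120.0, 'FR': 280.0, 'RL': 120.0, 'RR': 280.0}
```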



Electric vehicles are ideal for Pikes Peak, since they aren't affected by the 14,000-foot elevation that chokes gas-powered engines. Last year, Rhys Millen raced a modified eO PP03 up the mountain in 9:07.222, a time that would have won the gas-powered unlimited class in every year but 2013 and 2014. (Sébastien Loeb holds the unlimited record at 8:13.878, a time set in 2013.)

The Acura NSX production car, set to arrive next year for around $150,000, is an odd vehicle. It has a turbocharged 500-horsepower V6 but uses three small electric motors to boost acceleration and cut turbo lag. That gives it stunning acceleration, but purists are worried: the original NSX weighed just 2,712 pounds and was loved for its lack of excess, while the new model reportedly tips the scales at 3,800 pounds, thanks to the hybrid powertrain.