Tuesday, December 26, 2017

Elon Musk Shows Off the Tesla Roadster that SpaceX will Send Beyond Mars

As reported by The Verge: Weeks after announcing that he plans to send an original Tesla Roadster to space atop a Falcon Heavy rocket, Elon Musk has released photos of the car being prepped for launch at SpaceX headquarters. The series of photos, posted to Instagram, show the Roadster attached to a fitting and placed between the two halves of the payload fairing that caps the rocket. The photos were posted just hours after a picture leaked on Reddit that showed a grainy view of the car being readied for its final ride.

This will be the inaugural flight of the Falcon Heavy, a rocket that SpaceX has been planning for years. The successor to the Falcon 9, it's essentially (simply put) three boosters strapped together, with enough combined thrust to make it the most powerful rocket in the world. It will give SpaceX the ability to send bigger payloads to space while also helping the company push farther out into the Solar System.

But SpaceX doesn’t want to put a valuable payload on the very first flight, which even Musk has admitted could end (or begin) with an explosion. So the company plans to use a “dummy payload” instead. “Test flights of new rockets usually contain mass simulators in the form of concrete or steel blocks. That seemed extremely boring,” Musk wrote on Instagram today. “Of course, anything boring is terrible, especially companies, so we decided to send something unusual, something that made us feel.”

In April, Musk said he was trying to think of the “silliest thing we can imagine” to stick on top that first Falcon Heavy rocket. And on December 1st, we learned exactly what that meant. “Payload will be my midnight cherry Tesla Roadster playing Space Oddity,” Musk wrote on Twitter. “Destination is Mars orbit. Will be in deep space for a billion years or so if it doesn’t blow up on ascent.”


After some back and forth about whether he was joking, it became clear that Musk meant what he wrote. And there’s nothing really standing in his way — as long as the car doesn’t impact Mars, there aren’t really any laws blocking the effort.


Wednesday, December 20, 2017

Elon Musk Shows Off SpaceX’s Almost Fully-Assembled Falcon Heavy Rocket

As reported by The Verge: Elon Musk has tweeted out photos of SpaceX's almost fully assembled Falcon Heavy rocket in Cape Canaveral, Florida — the biggest and best glimpse so far into what the final iteration will look like. The launch is set for sometime in January, and the rocket has never gotten this far in development before, so the photos show something quite promising. From the pictures, the biggest missing pieces look to be the payload and nose cone at the top.

The Falcon Heavy consists of three Falcon 9 cores strapped together and will be mostly reusable, with all three cores intended to return to Earth after launch so they can be used for other missions. Musk has said the rocket’s outer cores for this upcoming launch are previously flown Falcon 9 boosters.
As previously reported, the Falcon Heavy will be one of the most powerful rockets ever made, capable of lofting around 140,000 pounds of cargo into low Earth orbit. But given all the delays and challenges endured by Falcon Heavy, Musk has understandably set the bar low for success. “I hope it makes it far enough away from the pad that it does not cause pad damage,” said Musk in July. “I would consider even that a win, to be honest.”

Tuesday, December 19, 2017

To Save Lives, Self-Driving Cars Must Become the Ultimate Defensive Drivers

As reported by Futurism: In early November, a self-driving shuttle and a delivery truck collided in Las Vegas. The event, in which no one was injured and no property was seriously damaged, attracted media and public attention in part because one of the vehicles was driving itself – and because that shuttle had been operating for less than an hour before the crash.

It’s not the first collision involving a self-driving vehicle. Other crashes have involved Ubers in Arizona, a Tesla in “autopilot” mode in Florida and several others in California. But in nearly every case, it was human error, not the self-driving car, that caused the problem.

In Las Vegas, the self-driving shuttle noticed a truck up ahead was backing up, and stopped and waited for it to get out of the shuttle’s way. But the human truck driver didn’t see the shuttle, and kept backing up. As the truck got closer, the shuttle didn’t move – forward or back – so the truck grazed the shuttle’s front bumper.

As a researcher working on autonomous systems for the past decade, I find that this event raises a number of questions: Why didn’t the shuttle honk, or back up to avoid the approaching truck? Was stopping and not moving the safest procedure? If self-driving cars are to make the roads safer, the bigger question is: What should these vehicles do to reduce mishaps? In my lab, we are developing self-driving cars and shuttles. We’d like to solve the underlying safety challenge: Even when autonomous vehicles are doing everything they’re supposed to, the drivers of nearby cars and trucks are still flawed, error-prone humans.

How Crashes Happen
There are two main causes for crashes involving autonomous vehicles. The first source of problems is when the sensors don’t detect what’s happening around the vehicle. Each sensor has its quirks: GPS works only with a clear view of the sky; cameras need enough light; lidar can’t work in fog; and radar is not particularly accurate. There may not be another sensor with different capabilities to take over. It’s not clear what the ideal set of sensors is for an autonomous vehicle – and, with both cost and computing power as limiting factors, the solution can’t be just adding more and more.

The second major problem happens when the vehicle encounters a situation that the people who wrote its software didn’t plan for – like having a truck driver not see the shuttle and back up into it. Just like human drivers, self-driving systems have to make hundreds of decisions every second, adjusting for new information coming in from the environment. When a self-driving car experiences something it’s not programmed to handle, it typically stops or pulls over to the roadside and waits for the situation to change. The shuttle in Las Vegas was presumably waiting for the truck to get out of the way before proceeding – but the truck kept getting closer. The shuttle may not have been programmed to honk or back up in situations like that – or may not have had room to back up.
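The article's account of the Las Vegas incident suggests why a stop-and-wait fallback alone can fail. As a rough illustration only (not the shuttle's actual software, whose logic is unknown), a fallback policy could be sketched with hypothetical thresholds like this:

```python
from enum import Enum, auto

class Action(Enum):
    PROCEED = auto()
    STOP = auto()
    REVERSE = auto()
    HONK = auto()

def fallback_action(obstacle_distance_m: float, closing_speed_mps: float,
                    rear_clearance_m: float) -> Action:
    """Hypothetical fallback policy for an unhandled situation.

    A naive policy always stops and waits. This sketch adds an escape
    step for the case the article describes: the obstacle keeps closing
    while the vehicle sits still.
    """
    SAFE_DISTANCE_M = 3.0  # assumed safety margin, purely illustrative

    if obstacle_distance_m > SAFE_DISTANCE_M:
        return Action.STOP          # wait for the situation to change
    if closing_speed_mps <= 0:
        return Action.STOP          # obstacle is no longer approaching
    # Obstacle still closing inside the safety margin: try to escape.
    if rear_clearance_m > SAFE_DISTANCE_M:
        return Action.REVERSE       # back away if there is room behind
    return Action.HONK              # no room to move; alert the other driver
```

Under this toy policy, the Las Vegas shuttle (truck closing, no room checked behind it) would have honked or backed up rather than simply holding position.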

The challenge for designers and programmers is combining the information from all the sensors to create an accurate representation – a computerized model – of the space around the vehicle. Then the software can interpret the representation to help the vehicle navigate and interact with whatever might be happening nearby. If the system’s perception isn’t good enough, the vehicle can’t make a good decision. The main cause of the fatal Tesla crash was that the car’s sensors couldn’t tell the difference between the bright sky and a large white truck crossing in front of the car.
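The fusion step described above can be illustrated with a deliberately minimal sketch: each sensor (camera, lidar, radar) may drop out entirely or report with degraded confidence, and the world model has to be built from whatever evidence remains. The weighting scheme and the `Reading` type here are assumptions for illustration, not any production system's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    distance_m: float   # estimated distance to a detected object
    confidence: float   # 0.0 (useless) .. 1.0 (fully trusted)

def fuse(readings: list[Optional[Reading]]) -> Optional[float]:
    """Confidence-weighted average over whichever sensors reported.

    A sensor that dropped out contributes None; a degraded sensor
    contributes a low-confidence reading. If nothing usable remains,
    the function returns None: the vehicle has no basis for a decision.
    """
    valid = [r for r in readings if r is not None and r.confidence > 0]
    if not valid:
        return None
    total = sum(r.confidence for r in valid)
    return sum(r.distance_m * r.confidence for r in valid) / total
```

The Tesla crash described above corresponds to the worst case for such a scheme: the camera reported with high confidence but was simply wrong, which is why fusion alone can't substitute for sensors that perceive accurately.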
If autonomous vehicles are to fulfill humans’ expectations of reducing crashes, it won’t be enough for them to drive safely. They must also be the ultimate defensive drivers, ready to react when others nearby drive unsafely. An Uber crash in Tempe, Arizona, in March 2017 is an example of this.

According to media reports, in that incident, a person in a Honda CRV was driving on a major road near the center of Tempe. She wanted to turn left, across three lanes of oncoming traffic. She could see two of the three lanes were clogged with traffic and not moving. She could not see the farthest lane from her, in which an Uber was driving autonomously at 38 mph in a 40 mph zone. The Honda driver made the left turn and hit the Uber car as it entered the intersection.

A human driver in the Uber car approaching an intersection might have expected cars to be turning across its lane. A person might have noticed she couldn’t see if that was happening and slowed down, perhaps avoiding the crash entirely. An autonomous car that’s safer than humans would have done the same – but the Uber wasn’t programmed to.
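The defensive behavior described here — slowing down when you can't see whether cross-traffic is turning — can be expressed as a simple occlusion-aware speed rule. This is a sketch of the idea only; the threshold and the notion of a "visible fraction" are assumptions, not how Uber's system was (or should have been) built:

```python
def approach_speed_mps(limit_mps: float, visible_fraction: float,
                       min_speed_mps: float = 4.0) -> float:
    """Hypothetical defensive-driving rule: scale speed with visibility.

    visible_fraction is how much of the cross-traffic region the
    sensors can actually see (1.0 = fully visible, 0.0 = fully blocked
    by, say, two lanes of stopped traffic). Speed never drops below a
    crawl so the vehicle still clears the intersection.
    """
    visible_fraction = max(0.0, min(1.0, visible_fraction))
    return max(min_speed_mps, limit_mps * visible_fraction)
```

In the Tempe scenario, a vehicle following a rule like this would have approached the occluded intersection well below the 40 mph limit, giving itself time to react to the turning Honda.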

Improve Testing
That Tempe crash and the more recent Las Vegas one are both examples of a vehicle not understanding the situation enough to determine the correct action. The vehicles were following the rules they’d been given, but they were not making sure their decisions were the safest ones. This is primarily because of the way most autonomous vehicles are tested.

The basic standard, of course, is whether self-driving cars can follow the rules of the road, obeying traffic lights and signs, knowing local laws about signaling lane changes, and otherwise behaving like a law-abiding driver. But that’s only the beginning.


Wednesday, December 6, 2017

Insurance Companies Are Now Offering Discounts if You Let Your Tesla Drive Itself

As reported by Futurism: While accidents have happened, one of the most appealing things about autonomous vehicles is their capacity to make our roads a safer place. Now, insurance companies are starting to offer financial incentives to promote adoption.

Britain’s largest automobile insurance company, Direct Line, has announced a 5 percent discount for customers who activate Autopilot functionality in their Tesla. It follows in the footsteps of Root, a startup that offers a similar promotion across nine states in the US.

It should be noted that Direct Line’s discount shouldn’t be taken as an endorsement of the technology, at least for the time being. The company is encouraging customers to utilize Autopilot so it can collect data on whether the feature contributes to safer driving, allowing insurance premiums to be adjusted accordingly.

“At present the driver is firmly in charge so it’s just like insuring other cars, but it does offer Direct Line a great opportunity to learn and prepare for the future,” the company’s head of motor development, Dan Freedman, told Reuters.

Tesla Crash Test
In May 2016, the driver of a Tesla Model S using Autopilot mode was killed when his vehicle collided with an 18-wheeler truck at a highway intersection. However, a subsequent report by the National Highway Traffic Safety Administration (NHTSA) largely exonerated the automaker.

The NHTSA found that the crash rate of Tesla vehicles dropped by nearly 40 percent after Autosteer was installed. Elon Musk has since pledged that future improvements to the Autopilot system will contribute to a 90 percent reduction in accidents.

Data published by the Association for Safe International Road Travel asserts that over 37,000 people die as a result of car accidents every year in the US, with some 2.35 million suffering injuries. Furthermore, there are some 1.3 million deaths related to car accidents worldwide every year. The NHTSA has previously released data that states that almost 95 percent of crashes are caused by drivers.

These figures could be reduced significantly if autonomous driving systems were more widely used. Self-driving cars will be safest when there are no human drivers on the road, because their ability to communicate with one another won’t be subject to the same misunderstandings.

It’s easy to see reduced insurance premiums being used to convince drivers to cede control to their cars on a broader scale. At some point, we might even see the need for individual insurance disappear completely.

When companies are sufficiently confident in their self-driving vehicles, they might take on the responsibility, agreeing to pay any damages in case of an accident. This would likely push the automotive industry toward a model where cars are predominantly leased, rather than owned. Looking further forward, traveling by car might resemble the Tesla-centric autonomous taxi service that’s currently being implemented in Dubai.

Saturday, December 2, 2017

SpaceX will use the first Falcon Heavy to send a Tesla Roadster to Mars, Elon Musk says

As reported by The Verge: Always willing to up the stakes of an already difficult situation, SpaceX CEO Elon Musk has said the first flight of his company’s Falcon Heavy rocket will be used to send a Tesla Roadster into space. Musk first tweeted out the idea on Friday evening, but has since separately confirmed his plans with The Verge.

The first Falcon Heavy’s “payload will be my midnight cherry Tesla Roadster playing Space Oddity,” Musk wrote on Twitter, referencing the famous David Bowie song. “Destination is Mars orbit. Will be in deep space for a billion years or so if it doesn’t blow up on ascent.”

Musk has spoken openly about the non-zero chance that the Falcon Heavy will explode during its first flight, and because of that he once said he wanted to stick the “silliest thing we can imagine” on top of the rocket. Now we know what he meant. It’s unclear at the time of publishing whether SpaceX has received any necessary approvals for this plan.

Falcon Heavy is the followup to SpaceX’s Falcon 9. It’s a more powerful rocket that the company hopes to use for missions to the Moon and Mars. It was originally supposed to take flight back in 2013 or 2014, but its maiden flight is now pegged for January 2018, according to Musk. (The company has been testing parts of the Falcon Heavy architecture over the last year, and has been busy readying the same launchpad that the Apollo 11 mission blasted off from for this flight.)

Falcon Heavy is, in overly simple terms, three of the company’s Falcon 9 rockets strapped together. It therefore will be capable of creating around three times the thrust of a single Falcon 9 rocket, allowing SpaceX to perform missions beyond low Earth orbit.

SpaceX also ultimately plans to be able to recover all three rocket cores that power the Falcon Heavy, just as it has done over the last year with the main booster stage of its Falcon 9s. It’s unclear if the company will attempt to recover the boosters on this maiden flight.

Of course, Musk also said earlier this fall at the International Astronautical Congress that he plans to pour all of SpaceX’s resources into an even bigger rocket architecture, known as the Interplanetary Transport System (or Big F**king Rocket, for short).

That new mega-rocket, when built, would essentially obsolesce the Falcon Heavy and the Falcon 9. It will be capable of taking on the same duties that those rockets perform, while adding new capabilities that range from planting a colony on Mars to making 30-minute transcontinental travel possible on Earth.

In that light, maybe shooting a Tesla into orbit around the Red Planet doesn’t seem so outlandish.


Thursday, November 30, 2017

US Supreme Court Considers if Your Privacy Rights Include Location Data

As reported by Engadget: With all the attention focused on the FCC's upcoming vote to dismantle net neutrality protections, it's easy to have missed an upcoming hearing that has the potential to reshape electronic-privacy protection. Today, the Supreme Court is hearing arguments in Carpenter v. United States — and at issue is cellphone-tower location data that law enforcement obtained without a warrant.

Defendant Timothy Carpenter, who was convicted as the mastermind behind two years of armed robberies in Michigan and Ohio, has argued that his location data, as gathered by his cellphone service provider, is covered under the Fourth Amendment, which protects citizens against "unreasonable searches and seizures." Thus far, appeals courts have upheld the initial decision that law enforcement didn't need a warrant to acquire this data, so the Supreme Court is now tasked with determining whether this data is deserving of more-rigorous privacy protection.

This case has been going on for years, so let's get some background details out of the way. Amy Howe, formerly a reporter and editor for SCOTUSblog, describes how law enforcement asked cellphone service providers for details on 16 phone numbers tied to the crimes, including Carpenter's number and that of a co-defendant. That data included months of cell-site-location information (CSLI) that shows precise GPS coordinates of cellphone towers plus the date and time that a phone tried to connect to the tower in question. The FBI used this to create a map of where the phone and its owner were at any given time. The FBI received multiple months of data, not just data for the days of the robberies, and was never asked to produce a warrant.

The FBI's explanation, which the courts have thus far backed up, relies on a legal principle known as the third-party doctrine. Jennifer Lynch from the Electronic Frontier Foundation explains that the third-party doctrine states that information you voluntarily share with "someone else" isn't protected by the Fourth Amendment, because third parties aren't legally bound to keep the info you shared with them private. And the definition of "someone else" is quite broad -- in this case, the courts view the data that cellphone providers collect as something customers are voluntarily sharing, simply by using their services.

Carpenter has argued that the third-party doctrine was not intended to be applied to things like cellphones. That's largely because the legal backing of the third-party doctrine is based on two Supreme Court cases from the 1970s, years before the first cellphone even went on sale to the public. Simply put, the way courts are ruling on third-party doctrine doesn't make sense in an age when so much sensitive information is bound up in our cellphones.

There's also a 2012 Supreme Court case that could back up Carpenter's argument. In United States v. Jones, the Supreme Court unanimously ruled that it was a Fourth Amendment violation to attach a GPS unit to a car without a search warrant. The FBI had planted the GPS onto a car parked on private property and used it to track its position every 10 seconds for a full month. That's more granular than the location info you get from a cellphone, but the cases do have some similarities.

After the court's decision, Justice Sonia Sotomayor wrote that the third-party doctrine was "ill suited to the digital age" and expressed her opinion that privacy case law was failing to keep up with the rapid changes that smartphones and other technology are making to how we as a society view privacy. "People disclose the phone numbers that they dial or text to their cellular providers, the URLs that they visit and the e-mail addresses with which they correspond to their Internet service providers, and the books, groceries and medications they purchase to online retailers," she wrote. "I would not assume that all information voluntarily disclosed to some member of the public for a limited purpose is, for that reason alone, disentitled to Fourth Amendment protection."

Some of the world's biggest tech companies, including Apple, Facebook, Microsoft, Google, Twitter and even Verizon agree with Sotomayor. In August, a total of 15 companies filed an amici curiae brief related to the Carpenter case in which they argue that "fourth amendment doctrine must adapt to the changing realities of the digital era" and that "rigid analog-era rules should yield to consideration of reasonable expectations of privacy in the digital age." Of course, this argument may not win over the Supreme Court, but its ruling in the 2012 GPS case shows that the justices could be in favor of stronger privacy protection.

Unfortunately for those who believe in expanded privacy rights, lower courts have so far sided with the third-party doctrine when it comes to CSLI. Lynch writes that "five federal appellate courts, in deeply divided opinions, have held that historical CSLI isn't protected by the Fourth Amendment -- in large part because the information is collected and stored by third-party service providers." We'll find out soon whether the Supreme Court is ready to break with those past rulings, a move that could lead both to freedom for Timothy Carpenter and a new precedent for privacy in the age of the smartphone.

Wednesday, November 29, 2017

Velodyne’s Latest LIDAR lets Driverless Cars Handle High-Speed Situations

Discerning a butterfly from a broken tire at 70 MPH.
As reported by The Verge: Self-driving cars from Alphabet’s Waymo are currently cruising the streets of suburban Arizona, navigating around with no human at the ready to take the wheel should something go wrong. It’s some of the most advanced testing we’ve seen so far, reaching what’s known as Level 4 autonomy. These cars can operate without any human input, but only under certain conditions and on certain roads.


Velodyne, one of the leading manufacturers of laser sensors for self-driving cars, made an announcement this morning that it hopes will push things to the next level. The company released details of its latest product, the VLS-128. It’s the most powerful LIDAR the company has ever created, with twice the range and three times the resolution of its predecessor. “This product was designed and built for the level 5, fully autonomous, mobility as a service market,” says Anand Gopalan, the company’s CTO, meaning it can perform as well or better than a human under any circumstances.
On the bottom, the view from Velodyne's HDL-64.  On top, the more detailed view of the VLS-128.
The most important sensor in most self-driving cars these days is LIDAR, a laser scanner that can provide a 360-degree view of what’s happening around the vehicle. To illustrate the capabilities of the 128, Gopalan gives an example of a particularly challenging situation. “There is a small black object far out in front of you. Is it a piece of paper, a butterfly, or some tire debris? The autonomous vehicle needs to be able to see this object and make a decision about whether it should change lanes or brake, and then take action. Traveling at 70 miles per hour, you have precious little time to do this.”


Gopalan says that the 128 can handle this sort of edge case. Its 300-meter range and incredible detail are one part of the equation. But it also works for tricky scenarios like tire debris because it allows an autonomous driving system to take fewer steps between seeing the world and deciding what to do. “With lower-resolution LIDAR you would need to somehow fuse the data with cameras and do some processing to create something that can be understood by the computer,” he tells The Verge. “You now have such a high-resolution image, you can take the data and put it directly into an image classification algorithm. It reduces complexity and time.”
Just last month Nvidia claimed its computer vision systems are ready for Level 5 autonomy. Intel has made similar noises. And Waymo, in demonstrating its latest system to our reporter, touted its ability to see and identify debris on the road.

Of course, Velodyne doesn’t compete directly with either of these companies. Its LIDAR system is a complement to the chips sold by companies like Nvidia. And while some automakers are working on building their own LIDAR in house, most are turning to suppliers like Velodyne as they look to build driverless car services that would compete with the likes of Waymo and Uber.

Velodyne says that it managed to add more range and resolution to its latest unit, while simultaneously reducing the size, weight, and power consumption. For now, however, it’s staying mum on the price. In fact, says founder and CEO David Hall, price isn’t really the point. “We took a cost is no issue approach with this thing,” says Hall. “The mobility-as-a-service customer would just as soon have a higher end LIDAR. The costs aren’t that high, when compared to the value of not having a driver.”