Wednesday, August 24, 2016

NVIDIA's Made-For-Autonomous-Cars CPU is Freaking Powerful

As reported by Engadget: NVIDIA debuted its Drive PX2 in-car supercomputer at CES in January, and now the company is showing off the Parker system on a chip powering it. The 256-core processor boasts up to 1.5 teraflops of juice for "deep learning-based self-driving AI cockpit systems," according to a post on NVIDIA's blog. That's in addition to the 24 trillion deep learning operations per second it can churn out. For a perhaps more familiar reference point, NVIDIA says that Parker can also decode and encode 4K video streams running at 60FPS -- no easy feat on its own.

However, Parker is significantly less beefy than NVIDIA's other deep learning initiative, the DGX-1 built for Elon Musk's OpenAI, which can hit 170 teraflops of performance. Regardless, this platform still sounds more than capable of running high-end digital dashboards and keeping your future autonomous car shiny side up.

On that front, NVIDIA says that in addition to the previously announced partnership with Volvo (which puts Drive PX2 into the XC90), there are currently "80 carmakers, tier 1 suppliers and university research centers" using Drive PX2.

Thursday, August 18, 2016

Uber’s First Self-Driving Fleet Arrives in Pittsburgh This Month

As reported by Bloomberg: Near the end of 2014, Uber co-founder and Chief Executive Officer Travis Kalanick flew to Pittsburgh on a mission: to hire dozens of the world’s experts in autonomous vehicles. The city is home to Carnegie Mellon University’s robotics department, which has produced many of the biggest names in the newly hot field. Sebastian Thrun, the creator of Google’s self-driving car project, spent seven years researching autonomous robots at CMU, and the project’s former director, Chris Urmson, was a CMU grad student.

“Travis had an idea that he wanted to do self-driving,” says John Bares, who had run CMU’s National Robotics Engineering Center for 13 years before founding Carnegie Robotics, a Pittsburgh-based company that makes components for self-driving industrial robots used in mining, farming, and the military. “I turned him down three times. But the case was pretty compelling.” Bares joined Uber in January 2015 and by early 2016 had recruited hundreds of engineers, robotics experts, and even a few car mechanics to join the venture. The goal: to replace Uber’s more than 1 million human drivers with robot drivers—as quickly as possible.

The plan seemed audacious, even reckless. And according to most analysts, true self-driving cars are years or decades away. Kalanick begs to differ. “We are going commercial,” he says in an interview with Bloomberg Businessweek. “This can’t just be about science.”

Starting later this month, Uber will allow customers in downtown Pittsburgh to summon self-driving cars from their phones, crossing an important milestone that no automotive or technology company has yet achieved. Google, widely regarded as the leader in the field, has been testing its fleet for several years, and Tesla Motors offers Autopilot, essentially a souped-up cruise control that drives the car on the highway. Earlier this week, Ford announced plans for an autonomous ride-sharing service. But none of these companies has yet brought a self-driving car-sharing service to market.

Uber’s Pittsburgh fleet, which will be supervised by humans in the driver’s seat for the time being, consists of specially modified Volvo XC90 sport-utility vehicles outfitted with dozens of sensors, including cameras, lasers, radar, and GPS receivers. Volvo Cars has so far delivered a handful of vehicles out of a total of 100 due by the end of the year. The two companies signed a pact earlier this year to spend $300 million to develop a fully autonomous car that will be ready for the road by 2021.

The Volvo deal isn’t exclusive; Uber plans to partner with other automakers as it races to recruit more engineers. In July the company reached an agreement to buy Otto, a 91-employee driverless truck startup founded earlier this year and staffed with engineers from a number of high-profile tech companies attempting to bring driverless cars to market, including Google, Apple, and Tesla. Uber declined to disclose the terms of the arrangement, but a person familiar with the deal says that if targets are met, it would be worth 1 percent of Uber’s most recent valuation. That would imply a price of about $680 million. Otto’s current employees will also collectively receive 20 percent of any profits Uber earns from building an autonomous trucking business.

Otto has developed a kit that allows big-rig trucks to steer themselves on highways, in theory freeing up the driver to nap in the back of the cabin. The system is being tested on highways around San Francisco. Aspects of the technology will be incorporated into Uber’s robot livery cabs and will be used to start an Uber-like service for long-haul trucking in the U.S., building on the intracity delivery services, like Uber Eats, that the company already offers.

The Otto deal is a coup for Uber in its simmering battle with Google, which has been plotting its own ride-sharing service using self-driving cars. Otto’s founders were key members of Google’s operation who decamped in January because, according to Otto co-founder Anthony Levandowski, “We were really excited about building something that could be launched early.” Levandowski, one of the original engineers on the self-driving team at Google, started Otto with Lior Ron, who served as the head of product for Google Maps for five years; Claire Delaunay, a Google robotics lead; and Don Burnette, another veteran Google engineer. Google suffered another departure earlier this month when Urmson announced that he, too, was leaving.



“The minute it was clear to us that our friends in Mountain View were going to be getting in the ride-sharing space, we needed to make sure there is an alternative [self-driving car],” says Kalanick. “Because if there is not, we’re not going to have any business.” Developing an autonomous vehicle, he adds, “is basically existential for us.” (Google also invests in Uber through Alphabet’s venture capital division, GV.)

Unlike Google and Tesla, Uber has no intention of manufacturing its own cars, Kalanick says. Instead, the company will strike deals with auto manufacturers, starting with Volvo Cars, and will develop kits for other models. The Otto deal will help; the company makes its own laser detection, or lidar, system, used in many self-driving cars. Kalanick believes that Uber can use the data collected from its app, where human drivers and riders are logging roughly 100 million miles per day, to quickly improve its self-driving mapping and navigation systems. “Nobody has set up software that can reliably drive a car safely without a human,” Kalanick says. “We are focusing on that.”

In Pittsburgh, customers will request cars the normal way, via Uber’s app, and will be paired with a driverless car at random. Trips will be free for the time being, rather than the standard local rate of $1.05 per mile. In the long run, Kalanick says, prices will fall so low that the per-mile cost of travel, even for long trips in rural areas, will be cheaper in a driverless Uber than in a private car. “That could be seen as a threat,” says Volvo Cars CEO Hakan Samuelsson. “We see it as an opportunity.”

Although Kalanick and other self-driving car advocates say the vehicles will ultimately save lives, they face harsh scrutiny for now. In July a driver using Tesla’s Autopilot died after colliding with a tractor-trailer, apparently because neither the driver nor the car’s computer saw it. (The crash is currently being investigated by the National Highway Traffic Safety Administration.) Google has seen a handful of accidents, but they’ve been less severe, in part because it limits its prototype cars to 25 miles per hour. Uber’s cars haven’t had any fender benders since they began road-testing in Pittsburgh in May, but at some point something will go wrong, according to Raffi Krikorian, the company’s engineering director. “We’re interacting with reality every day,” he says. “It’s coming.”

For now, Uber’s test cars travel with safety drivers, as common sense and the law dictate. These professionally trained engineers sit with their fingertips on the wheel, ready to take control if the car encounters an unexpected obstacle. A co-pilot, in the front passenger seat, takes notes on a laptop, and everything that happens is recorded by cameras inside and outside the car so that any glitches can be ironed out. Each car is also equipped with a tablet computer in the back seat, designed to tell riders that they’re in an autonomous car and to explain what’s happening. “The goal is to wean us off of having drivers in the car, so we don’t want the public talking to our safety drivers,” Krikorian says.

On a recent weekday test drive, the safety drivers were still an essential part of the experience, as Uber’s autonomous car briefly turned un-autonomous while crossing the Allegheny River. A chime sounded, a signal to the driver to take the wheel. A second ding a few seconds later indicated that the car was back under computer control. “Bridges are really hard,” Krikorian says. “And there are like 500 bridges in Pittsburgh.”


Bridges are hard in part because of the way Uber’s system works. Over the past year and a half, the company has been creating extremely detailed maps that include not just roads and lane markings but also buildings, potholes, parked cars, fire hydrants, traffic lights, trees, and anything else on Pittsburgh’s streets. As the car moves, it collects data and, using a large, liquid-cooled computer in the trunk, compares what it sees with the preexisting maps to identify (and avoid) pedestrians, cyclists, stray dogs, and anything else. Bridges, unlike normal streets, offer few environmental cues—there are no buildings, for instance—making it hard for the car to figure out exactly where it is. Uber’s cars have Global Positioning System sensors, but those are only accurate to within about 10 feet; Uber’s systems strive for accuracy down to the inch.
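
To make that concrete, here is a minimal Python sketch of the map-matching idea (landmark coordinates invented, not Uber's code): a coarse GPS fix is refined by aligning the landmarks the car currently sees against a pre-surveyed map. On a bridge, the list of observable landmarks is nearly empty, which is exactly why localization degrades there.

```python
import numpy as np

# Hypothetical sketch of map-based localization, not Uber's code: refine a
# coarse GPS fix by aligning the landmarks the car currently sees against a
# pre-surveyed map. All coordinates are invented for illustration.

PRIOR_MAP = np.array([[12.0, 3.1], [15.2, 7.4], [9.8, 11.0]])  # surveyed landmarks (m)

def refine_pose(gps_xy, observed, search=3.0, step=0.05):
    """Find the offset that best aligns vehicle-relative observations with the map."""
    best_offset, best_err = np.zeros(2), np.inf
    for dx in np.arange(-search, search, step):
        for dy in np.arange(-search, search, step):
            candidate = observed + gps_xy + np.array([dx, dy])
            # total distance from each shifted observation to its nearest map landmark
            err = sum(np.min(np.linalg.norm(PRIOR_MAP - p, axis=1)) for p in candidate)
            if err < best_err:
                best_err, best_offset = err, np.array([dx, dy])
    return gps_xy + best_offset

true_pose = np.array([1.0, -1.6])
gps_fix = np.array([1.5, -2.0])            # GPS alone: off by roughly half a metre
observations = PRIOR_MAP - true_pose       # landmarks as seen from the true pose
print(refine_pose(gps_fix, observations))  # ~[1.0, -1.6]: map matching recovers the pose
```

A production system would use scan matching or a particle filter rather than this brute-force grid search, but the principle is the same: with no landmarks to match, the refinement step has nothing to work with.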

When the Otto acquisition closes, likely this month, Otto co-founder Levandowski will assume leadership of Uber’s driverless car operation while continuing to oversee his company’s robotic trucking business. The plan is to open two additional Uber R&D centers: one in the Otto office, a cavernous garage in San Francisco’s Soma neighborhood, and a second in Palo Alto. “I feel like we’re brothers from another mother,” Kalanick says of Levandowski.

The two men first met at the TED conference in 2012, when Levandowski was showing off an early version of Google’s self-driving car. Kalanick offered to buy 20 of the prototypes on the spot—“It seemed like the obvious next step,” he says with a laugh—before Levandowski broke the bad news to him. The cars were running on a loop in a closed course with no pedestrians; they wouldn't be safe outside the TED parking lot. “It was like a roller coaster with no track,” Levandowski explains. “If you were to step in front of the vehicle, it would have just run you over.”

Kalanick began courting Levandowski this spring, broaching the possibility of an acquisition during a series of 10-mile night walks from the Soma neighborhood where Uber is also headquartered to the Golden Gate Bridge. The two men would leave their offices separately—to avoid being seen by employees, the press, or competitors. They’d grab takeout food, then rendezvous near the city’s Ferry Building. Levandowski says he saw a union as a way to bring the company’s trucks to market faster. 

For his part, Kalanick sees it as a way to further corner the market for autonomous driving engineers. “If Uber wants to catch up to Google and be the leader in autonomy, we have to have the best minds,” he says, and then clarifies: “We have to have all the great minds.”

Astronauts Are About to Install a Parking Space for SpaceX and Boeing

As reported by Popular Mechanics: Starting in 2017, Boeing and SpaceX will become the first private companies to send NASA astronauts into orbit. When that happens, the International Space Station is going to need a new parking space. So, on Friday, astronauts Jeff Williams and Kate Rubins will venture outside the ISS to finish installing a new docking adapter.

Installing these adapters is a necessary step in NASA's Commercial Crew Program, which seeks to spur development of commercial crew spacecraft. Since the space shuttle was retired in 2011, NASA has had to pay Russia millions to take its astronauts to the ISS. The Commercial Crew Program will give NASA cheaper options for crewed spaceflight.

The spacewalk is scheduled to begin at 8:05 a.m. on Friday, and live coverage will start at 6:30 a.m. This will be Williams' fourth spacewalk, and Rubins' first. Here is a video describing exactly what the spacewalk will entail:



Wednesday, August 17, 2016

Beartooth Turns Your Smartphone Into An Industrial Radio

As reported by Gadget Review: All a smartphone is, is a radio with a computer bolted to it. Seriously. That’s it; it’s just radio waves, the same radio waves pumping Today’s Classic Hits through the atmosphere. Of course, smartphones aren’t on the same channel as professional, industrial radios… but Beartooth makes it possible for them to get on those frequencies.

A Radio On Your Radio

Essentially, Beartooth is a radio that attaches to your smartphone and interacts with it in various ways using publicly available radio bandwidth. It’s not dissimilar to the GoTenna in that respect, but Beartooth adds a few elements, especially in the physical realm, that make it much more interesting to the outdoorsy and to those who just like fiddling with microwave radiation.

The Citizen’s Band, Improved

Essentially, it’s a CB radio with all the features you want from a smartphone. For example, you can have one-to-one calls, or bring more people into a group call, although each of them will need a Beartooth to participate. If you’ve got a friend you want to send a text message to, that’s as easy as using your phone. There’s even confirmation of message receipt built in, and you can send out an SOS signal if you need it, making this crucial for campers and other outdoorsy types. Also useful: a second battery is built into the case, doubling the life of your phone.
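
That message-receipt confirmation is the classic acknowledge-and-retransmit pattern used on lossy radio links. Here is a toy Python sketch of the pattern (hypothetical, not Beartooth's actual protocol; the 40 percent loss rate is invented):

```python
import random
import time

# Toy sketch of receipt confirmation over a lossy link -- hypothetical, not
# Beartooth's actual protocol. The sender retransmits until both the message
# and its acknowledgement get through.

def radio_transmit(packet, loss=0.4):
    """Pretend radio hop: the packet is lost 40% of the time."""
    return random.random() > loss

def send_with_receipt(message, retries=5, timeout=0.1):
    for attempt in range(1, retries + 1):
        delivered = radio_transmit(message)            # message out...
        acked = delivered and radio_transmit(b"ACK")   # ...receipt back
        if acked:
            return f"delivered, receipt confirmed on attempt {attempt}"
        time.sleep(timeout)  # back off, then retransmit
    return "no receipt: recipient likely out of range"

print(send_with_receipt(b"campfire is lit"))
```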

The Great Outdoors

The simple fact of the matter is that if you’re going outdoors and out of contact range, you should probably have a radio, signal beacon, or other tool available to reach help if you get stuck. By that standard alone, the Beartooth is a useful tool to have in your camping gear. It also means you can send text messages to your friends while you’re waiting for the food to cook, so that’s useful time spent right there.

With Walabot, Your Phone Can "See" Through Walls

As reported by Fast Company: In the first demo, Raviv Melamed, CEO and cofounder of Vayyar and Walabot, uses the camera on his phone to see through our conference room table and detect the number of fingers he's holding up beneath the surface. Next, there's a video of a person walking down a hall and moving behind a barrier; the technology senses the human form even though it's no longer visible to our eyes. Then comes a clip of vodka being poured past a couple of sensors to determine the purity of the alcohol on a microscopic level.

This superhero-like X-ray vision comes courtesy of a new microchip-based, radio frequency sensor technology. It can be used to analyze and create 3-D images of pretty much anything behind or inside objects (the only thing it can't "see" through is metal). Radio frequency tech has been around for decades; what makes this chip innovative is that it instantly transforms RF waves into digital output. Radio waves emitted from the chip sense how much of the signal is absorbed by an object in its path. Algorithms can then be applied to the digital data translated from the RF signals to determine all kinds of information about those objects: their density, their dimensions, and, with software, what those objects actually are.
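
That "how much of the signal is absorbed" measurement follows the familiar exponential attenuation law. Here is a hedged sketch of the arithmetic (not Vayyar's algorithm; the power and thickness figures are invented):

```python
import math

# Hedged sketch of the underlying physics, not Vayyar's algorithm. A signal
# passing through material attenuates roughly as P_rx = P_tx * exp(-alpha * d),
# so measuring how much survives lets you solve for alpha, a rough proxy for
# the material's density. All numbers below are invented for illustration.

def absorption_coefficient(p_tx_watts, p_rx_watts, thickness_m):
    """Solve P_rx = P_tx * exp(-alpha * d) for alpha, in 1/m."""
    return -math.log(p_rx_watts / p_tx_watts) / thickness_m

# Example: 1.0 W transmitted, 0.37 W emerges from 5 cm of material
alpha = absorption_coefficient(1.0, 0.37, 0.05)
print(f"alpha = {alpha:.1f} per metre")  # denser, lossier material -> larger alpha
```
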
Though it's not the first technology capable of turning RF into digital, it's certainly among the smallest and least expensive—the sensor could fit inside a mobile phone. Vayyar, which created the chip technology and is selling Walabot, is based in Tel Aviv with 36 employees and has raised $34 million in funding. It's part of the rapidly evolving imaging technology sector that includes the handheld ultrasound device from Butterfly Network Inc., the space-mapping camera from Matterport, and the security scanner that can detect weapons underneath clothing from the U.K.-based company Radio Physics.

The first application of Vayyar's chip is medical: It's being developed to detect tumors in breast tissue. Since it can be produced at a fraction of the cost and physical size of today's solutions, it could make breast cancer screening accessible and affordable to people around the globe.

What else could it be used for? That's where you come in. Walabot is being released publicly in April so that robot makers and hardware tinkerers can build their own apps for Android, Raspberry Pi, or most any other computer with a USB connection. "Why limit the technology for one startup when you can actually go and allow other people to innovate?" says Melamed, a former Intel executive and Israeli Defense Forces engineer.

Walabot has seemingly endless potential applications. It could be used to analyze your breathing while you sleep, or examine root structures in your garden, or track the speed of cars racing past your house. And when it comes to video gaming, Melamed says this technology is far more accurate than any other single motion sensor currently on the market. It could help untether VR headsets by pairing with sensors placed on the body—perhaps simple bands around players' arms and legs.

Melamed uses the example of a simple virtual ping-pong match. Right now, the only moving body parts would be the head and a hand, since that's all that can be tracked. "I want to see your body, I want to see your movement, right?" says Melamed. "You have those other technologies, like accelerometers—the problem with accelerometers is they drift. What we can do with this technology is actually put several sensors on your body and track your body in a room like 5 meters by 5 meters, to the level of a centimeter, and now this is a totally different kind of feeling, I can actually see your limbs and we don't drift."
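
Melamed's point about drift is easy to demonstrate numerically. The sketch below assumes a stationary user and a 100 Hz accelerometer with a tiny constant bias; because dead reckoning double-integrates acceleration, even that tiny bias grows into metres of error within a minute, while an absolute fix such as the RF tracking he describes stays at its native accuracy no matter how long you track.

```python
import random

# Numeric illustration of accelerometer drift (assumptions: a stationary
# user, a 100 Hz accelerometer with a tiny constant bias plus noise).
# Dead reckoning double-integrates acceleration, so bias error grows
# quadratically with time; an absolute fix has bounded error at any time.

dt, bias = 0.01, 0.002            # 100 Hz sampling, 0.002 m/s^2 sensor bias
velocity = position = 0.0
for _ in range(100 * 60):         # one minute of integration
    accel = bias + random.gauss(0, 0.01)  # true acceleration is zero
    velocity += accel * dt
    position += velocity * dt

print(f"dead-reckoned position error after 60 s: {position:.2f} m")  # ~3-4 m
# An absolute measurement (e.g. RF positioning) stays at its native accuracy
# -- centimetre-level in Melamed's description -- however long you track.
```
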
Of course, accelerometer-based technologies like the Gear VR, Oculus Rift, and Google Cardboard have all addressed and continue to minimize the drift issue by applying other sensor-based technologies to their processes—and there are other companies on the market that are attempting to bring the full-body experience to VR via sensors placed on the body. The difference is that RF technology can be deployed for virtual reality pretty successfully without the aid of other devices like accelerometers or magnetometers.
How does Vayyar's technology differ from something like the Kinect? Well, for one thing, the Kinect is primarily a camera-based optics system, while Vayyar's system is radio frequency based. Kinect works pretty well in the dark, but radio frequency works without any light. For another, the Kinect offers 30 frames per second, while Melamed claims that Vayyar's technology can process 100 frames per second.

This breakthrough in imaging tech is what can happen when a successful executive moves on to the next chapter. Melamed was the vice president of Architecture and General Manager of Mobile Wireless at Intel back in 2010, when he asked himself: What's next? He began looking at medical imaging, which still relies on technology that was developed before mobile computing made chips and sensors low cost and lightweight.

"I started to ask the question, 'How come there is no simple, easy-to-use modality that you can bring to people instead of these big machines that cost so much money?'" he recalls. "Breast cancer was a huge problem to solve and a big market, so a good combination," he adds (it's also an illness that touched his life personally when his mother developed the disease). Melamed decided to apply expertise from both his work inside the Israeli Defense Forces 20 years ago and his time at Intel working with high-end communication chips to build a better detection system for breast cancer—and soon realized that the radio transmission-based technology he was developing could be applied to a range of industries and problems. Vayyar was born.



In the lead-up to its April launch, the Walabot team is giving the technology to a handful of leading makers, holding internal hackathons, and studying how people play with the Vayyar chip. "The whole point is to start a community around it and have people kind of play around with it and develop around it," Melamed says, adding that he hopes developers will share their application code with each other.

Melamed says he believes great innovations occur when people find ways to apply technological breakthroughs in one industry to another—and thinks his work on Vayyar, which combines his experiences in the digital communications world with radio frequency technology, is a prime example of that. "I think a lot of the breakthroughs came when people took one technology from one industry and implemented it in a totally different industry, and that is basically what we are trying to do here," he says. "I went and looked at what people did in the past, and for the last 30 years people are trying to do breast cancer imaging or any other imaging with radars with radio frequencies—but they kept bouncing into the same problems," Melamed says. "With the architecture we are using in communication and the things that we are able to do over there, bringing technology from that world into this world, that's kind of created that breakthrough, the ability to put so many transceivers and such a high-end kind of technology into a small silicon."



Walabot will be sold in three models—each a bit more powerful than the last—and will cost between $149 and $599. It started shipping in April 2016, when the public developer API also became available.

Monday, August 15, 2016

Audi Cars Will Start Talking to City Traffic Systems This Fall

As reported by Engadget: If you've ever been stuck at a red light that seems to last an eternity, you'll be happy to know that Audi has announced it will start working with municipalities to tell its cars when a light is about to turn green. The automaker says this is the first step in a vehicle-to-infrastructure (V2I) partnership with cities that will be launching this fall.

The Audis won't be talking to the traffic lights directly; instead, the vehicles will use their built-in LTE connections to get information from a participating city's central traffic control system. Using that data and GPS, the cars will be able to show on the dashboard when an upcoming signal will turn green.



The system does not use the upcoming DSRC standard for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure communication. Instead, it relies on partner Traffic Technology Services to establish a data relationship with the municipalities. As a vehicle enters a "zone," it requests a one-time unique token to establish communication with the infrastructure and query the stoplight phase.
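
Here is a minimal sketch of that token-then-query flow. Everything in it (the class, method names, and response fields) is invented for illustration, since the actual Traffic Technology Services interface is not public; it only shows the shape of the exchange described above.

```python
import secrets

# Hedged sketch of the token-then-query flow. The class, method names, and
# response fields are invented; the real TTS/Audi interface is not public.

class CityTrafficBackend:
    """Stand-in for a municipal traffic control system reached over LTE."""
    def __init__(self):
        self.valid_tokens = set()

    def issue_token(self, vehicle_id, zone_id):
        # Step 1: a vehicle entering a zone requests a one-time unique token
        token = secrets.token_hex(8)
        self.valid_tokens.add(token)
        return token

    def signal_phase(self, token, lat, lon):
        # Step 2: the token authorizes a phase query, then expires (one-time use)
        if token not in self.valid_tokens:
            raise PermissionError("invalid or already-used token")
        self.valid_tokens.discard(token)
        return {"signal": "5th & Main", "state": "red", "seconds_to_green": 12}

city = CityTrafficBackend()
token = city.issue_token(vehicle_id="audi-q7-demo", zone_id="downtown-03")
print(city.signal_phase(token, 40.44, -79.99))  # placeholder coordinates; dashboard shows countdown
```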

As for DSRC, Audi product and technology communications senior specialist Justin Goduto said it's not quite ready for widespread deployment yet, and Audi wants to move forward now. "For the time being using this methodology gives us true integration to the infrastructure," Goduto said.

The technology needed to get all that green light information is available in 2017 Audi Q7, A4 and A4 Allroad vehicles built after June 1, 2016. Drivers will also need to subscribe to Audi Connect Prime. As for the cities, the automaker isn't ready to announce where the V2I infrastructure will roll out first. But it hopes the system will be working in five to seven metropolitan areas by the end of the year.

SpaceX Nails a Tricky Fourth Rocket Landing at Sea

As reported by Engadget: SpaceX is good enough at sea-based rocket landings that they've nearly become commonplace. The private spaceflight outfit has successfully landed a Falcon 9 rocket aboard a drone ship for the fourth time, or its sixth landing overall. And this wasn't a particularly easy trip, either. On top of the inherent challenges of a sea landing, the destination for the rocket's payload (the JCSAT-16 communications satellite) meant that the vehicle had to contend with both "extreme velocities" and high re-entry heat.

No, SpaceX hasn't reused a rocket yet -- that's happening in the fall. However, the touchdown suggests that the company might just meet its objective of launching a rocket every two weeks by the end of 2016. There are no guarantees that it'll land every time (just ask SpaceX what happened in June), but the success rate is now consistent enough that Elon Musk and crew can expect that rockets will return intact.



First stage landing confirmed on the droneship. Second stage & JCSAT-16 continuing to orbit http://www.spacex.com/webcast