
Wednesday, January 7, 2015

Mercedes-Benz F015 Autonomous Car Concept (Video)

As reported by MotorAuthority: The hydrogen-electric plug-in hybrid system powering the F015 is unusual in that it produces zero emissions at all times. Mercedes hasn’t gone into detail but says the drivetrain consists of a hydrogen fuel cell stack, a lithium-ion battery and two electric motors rated at 134 horsepower each.

With a full battery (which can be charged at home via cable-less inductive means), the F015 is capable of driving up to 124 miles. And with a full tank of hydrogen, this range increases to 684 miles, by having the fuel cell top up the battery whenever the charge is running low. The setup is an evolution of the one in 2011’s F125! concept.

Then there is the autonomous tech, which relies not only on sensors but also on Car-2-Car and Car-2-Object communications technology. Already previewed in the S-Class-based S500 INTELLIGENT DRIVE prototype, multiple sensors, advanced 3D cameras and highly-detailed digital maps help drive the car without the need for an actual driver.

Inside the F015 is a variable seat system with four lounge-style chairs that can rotate far enough that they all face each other. When the car reaches its destination, the seats then rotate towards the door for an easy exit for passengers.

A central idea of the concept is a continuous exchange of information between vehicles, passengers and the outside world. For interaction within the vehicle, the passengers rely on six display screens located around the cabin. They also interact with the vehicle through gestures, eye-tracking or by touching the high-resolution screens. Outside, the F015 uses laser projection, radio signals and LED displays to communicate with its surroundings.

The F015 clearly isn’t intended for production but it does show that Mercedes has the technology today to deliver a zero-emission vehicle that can drive itself and offer all the luxury one would expect of the brand. The automaker is hoping the concept’s introduction will drive the social discourse on mobility and the design of the urban environment ahead. This is particularly important in the area of legislation for autonomous cars, which is likely to remain the biggest hurdle for the technology’s introduction to the market.

"The single most important luxury goods of the 21st century are private space and time," Mercedes-Benz chief Dieter Zetsche said at last night's reveal. "Autonomously driving cars by Mercedes-Benz shall offer exactly that—with the F015 Luxury in Motion, this revolutionary concept of mobility becomes tangible for the first time."

Zetsche went on to explain that Mercedes has a “master plan” in place to take the big leap required to see autonomous cars move from the concept to production phase. The F015, he said, demonstrates where this may take us.

Jaguar Demos a Car That Keeps an Eye on Its Driver

As reported by MIT Technology Review: Many cars already include plenty of sensors—cameras for spotting objects in your blind spot, for instance—but they’re usually keeping an eye on the outside world, not on what’s going on behind the wheel.

An Australian company called Seeing Machines is turning sensing inward with technology that focuses on drivers themselves in hopes of reducing distracted and drowsy driving. The company is using cameras and software to detect eye and facial movements so it can alert drivers who have become inattentive, either due to drowsiness or distraction. This kind of technology is set to become more common, especially as cars become more capable of driving themselves on some stretches of road.

So far, the company has focused its technology on drivers of heavy industrial trucks used in the mining industry, but it’s also moving toward the consumer market: it has a deal with auto parts maker Takata to bring its technology to cars and other vehicles. At the International Consumer Electronics Show in Las Vegas this week, Seeing Machines will demonstrate its attention-monitoring sensors in the dashboard of a Jaguar F-Type.

Built-in systems for tracking drivers’ attention are an option on a still small but growing number of cars from companies such as Lexus. Some companies, like Google, paint a future where cars are completely automated.

Seeing Machines’ technology uses a small infrared camera fitted to the dashboard that works with software running on the vehicle—not on a remote server—to evaluate whether the driver is looking at the road. It evaluates a person’s head position, facial expression, and blinking rate. The camera captures 60 frames per second, and the software analyzes the images to determine the driver’s alertness.
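
To make the idea concrete, here is a minimal sketch of how per-frame measurements like eye-openness could be rolled up into a drowsiness flag, in the spirit of the PERCLOS metric (the fraction of time the eyes are mostly closed). The window length, thresholds and the eye_openness input are illustrative assumptions; Seeing Machines has not published its actual algorithm.

```python
from collections import deque

FRAME_RATE_HZ = 60      # camera rate cited in the article
WINDOW_SECONDS = 60     # rolling window length (assumed)
CLOSED_THRESHOLD = 0.2  # eye-openness ratio treated as "closed" (assumed)
PERCLOS_ALERT = 0.15    # fraction of closed frames that triggers a warning (assumed)

class DrowsinessMonitor:
    """Rolling PERCLOS estimate from per-frame eye-openness values in [0, 1]."""

    def __init__(self):
        self.closed_flags = deque(maxlen=FRAME_RATE_HZ * WINDOW_SECONDS)

    def update(self, eye_openness: float) -> bool:
        """Record one frame's measurement; return True when the driver looks drowsy."""
        self.closed_flags.append(eye_openness < CLOSED_THRESHOLD)
        if len(self.closed_flags) < self.closed_flags.maxlen:
            return False  # not enough history yet
        perclos = sum(self.closed_flags) / len(self.closed_flags)
        return perclos > PERCLOS_ALERT
```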

Nick Langdale-Smith, Seeing Machines’ vice president for company partnerships, says the company can get a good read on a driver’s state by tracking his pupils in particular. The software measures eyeball rotation and detects where the driver’s line of sight intersects with objects around the driver. This allows the software to determine the amount of time a driver spends looking at the dashboard, car mirrors, road, and elsewhere—which can help it judge whether a driver is paying attention to traffic or starting to doze off.
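
A rough sketch of that gaze-accounting idea: classify each frame's line of sight into a region of interest, accumulate per-region dwell time, and flag overly long off-road glances. The region names, the two-second limit and the input format are assumptions made for illustration, not details of Seeing Machines' software.

```python
REGIONS = ("road", "mirror_left", "mirror_right", "dashboard", "other")
OFF_ROAD_LIMIT_S = 2.0   # continuous off-road glance that triggers an alert (assumed)

class GazeAccounting:
    """Tracks where the driver has been looking, one gaze sample per frame."""

    def __init__(self):
        self.dwell = {region: 0.0 for region in REGIONS}
        self.off_road_time = 0.0

    def update(self, region: str, dt: float) -> bool:
        """Add one frame (dt = frame interval in seconds); return True if an alert is due."""
        self.dwell[region] = self.dwell.get(region, 0.0) + dt
        if region == "road":
            self.off_road_time = 0.0
        else:
            self.off_road_time += dt
        return self.off_road_time > OFF_ROAD_LIMIT_S

monitor = GazeAccounting()
print(monitor.update("dashboard", 1 / 60))   # False until 2 s of off-road gaze accumulates
```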

If the system senses you’re not paying attention, it will alert you to put your eyes back on the road or pull over. For companies that are already using the system in their trucks, this comes via a seat-based buzz, though Langdale-Smith says this won’t be the case in an eventual consumer version of the technology. “This is there to save your life,” he says.

The device sets off an alarm and a vibration in the driver's seat.

To date, Seeing Machines has honed its technology in the mining industry, where Caterpillar and other makers of huge vehicles that transport earth and minerals are using it to monitor drivers. “The shift is long and the task is boring,” says Langdale-Smith. “And when [drivers] fall asleep, the vehicles turn into 450-ton juggernauts.”

Seeing Machines’ deal with Takata, announced in September, will include the installation of driver-monitoring systems in cars made by an unnamed major automaker—it’s not clear when this will happen, though. 

Tuesday, January 6, 2015

CES 2015: Toyota Opens Patents on Hydrogen Fuel Cell Technology

As reported by the LA Times: Hoping to speed development of hydrogen fuel cell vehicles, Toyota said Monday that it would offer thousands of patents on related technologies to rival automakers, for free.

The announcement, made at the annual Consumer Electronics Show in Las Vegas, echoes a similar move by electric car maker Tesla in 2014, when Chief Executive Elon Musk made Tesla patents available to all, hoping to spur innovation in the electric vehicle world (and, perhaps, to draw publicity).

Toyota has similar goals for the fuel-cell car market.

“At Toyota, we believe that when good ideas are shared, great things can happen,” Bob Carter, senior vice president at Toyota, said before the announcement. “The first generation hydrogen fuel cell vehicles, launched between 2015 and 2020, will be critical, requiring a concerted effort and unconventional collaboration.”

Toyota will make 5,680 patents available to automakers to build and sell their own fuel cell vehicles. Parts suppliers, energy companies and bus manufacturers can also use the patents, which remain royalty-free through 2020.

Another 70 patents relate directly to hydrogen fueling stations, a move both Toyota and analysts say could spur wider adoption of hydrogen electric vehicles.

"I think overall it makes sense," said Devin Lindsay, principal powertrain analyst with IHS Automotive. "Right now the automakers all need to help each other, and more infrastructure is going to help kick-start the industry."

The patents also relate to Toyota’s upcoming Mirai hydrogen fuel cell car, which is slated to hit the U.S. market in October and is already on sale in Japan for the equivalent of about $60,000. With a range around 300 miles, it can refuel at a hydrogen station in about five minutes.

Although Toyota’s move Monday will help advance the development of hydrogen fuel cell vehicles, the automaker may not be sacrificing much in making its patents available.

“I don’t think the technology that Toyota has is that groundbreaking,” said David Cole, head of AutoHarvest Foundation, a nonprofit at Wayne State University in Detroit, and chairman emeritus of the Center for Automotive Research. “It’s not a patent issue.”


Instead, the development of cost-effective hydrogen fuel cell vehicles has been stymied by the high cost of research and development, and by a shortage of brainpower necessary to figure out how to make the hydrogen fuel itself more energy dense, and therefore more efficient, Cole said.

This is one reason fiercely competitive automakers are eager to work together on fuel cell technologies. Honda, with its own hydrogen fuel cell car set for 2016, has partnered with General Motors on new fuel cell applications. Both companies lead the industry in fuel cell patents.

And Ford, Renault, Nissan, and Mercedes’ parent company, Daimler, recently agreed to develop fuel cell technologies that all four companies would share.

“It’s historic the amount of collaboration that’s occurring,” Cole said. “If automakers don’t, we’re not going to get down the fuel cell road as far and as fast as we like.”

Toyota says it's been developing its fuel cell technology for the last 20 years. But Toyota knows that it can't sway the industry toward the widespread adoption of hydrogen as a fuel source on its own.

"We believe that hydrogen electric will be the primary fuel for the next 100 years," Carter said. "Now, it’s not going to happen overnight. By eliminating the traditional corporate boundaries, we can speed the metabolism of everyone’s research and development and move into a future of mobility quicker, more effectively and more economically."

Meanwhile, Hyundai has a hydrogen version of its Tucson crossover available for lease, and brands such as Volkswagen, Audi, and BMW have all shown prototypes and concept versions of hydrogen vehicles.

Automakers relish the idea of hydrogen fuel cell vehicles for several reasons. Many consumers still feel “range anxiety” about pure electric cars, despite claims by brands selling electric cars that a driver’s typical commute is far shorter than the maximum range.

Fuel cell vehicles have a much longer range -- 300 to 400 miles is typical -- and can refill in a matter of minutes. Yet the smooth, quiet drivetrain of a fuel cell vehicle is very similar to that of an electric car. The key difference is the power source.

Rather than draw power directly from rechargeable batteries, the electric drivetrain in a hydrogen fuel cell vehicle gets its electricity from a fuel cell that combines hydrogen with oxygen, creating electricity and emitting only water vapor as “waste.”
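
At the cell level the chemistry is straightforward: hydrogen from the tank is oxidized at the anode and oxygen from the air is reduced at the cathode, for an overall reaction of 2H2 + O2 → 2H2O, releasing electrical energy with water as the only exhaust.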

Those clean emissions make fuel cell vehicles zero emissions in the eyes of state and federal governments. This is another reason automakers are drawn to the promise of fuel cells.

By 2025, the state of California wants 1.5 million zero-emission vehicles on the road and 15% of vehicles sold to be zero emission. This includes EVs, plug-in hybrids with limited electric-only ranges, and hydrogen vehicles. (Critics note that the process of manufacturing and distributing hydrogen does create some toxic emissions.)


But limited infrastructure has remained a key hurdle for automakers. At the 2014 L.A. Auto Show in November, VW debuted a version of its upcoming Golf SportWagen that runs on hydrogen.

Yet in launching the concept, the automaker cautioned, “Before the market launch a hydrogen infrastructure would have to be created: Not only a broad network of hydrogen fuel stations, but also the production of the hydrogen itself.”

Currently, California has only 11 hydrogen refueling stations, though some analysts say the total could hit 40 within a year. But Toyota says it's looking big picture.

"This isn’t a six-month or five-year play," Carter said. "This is where we see the automobile industry going for the next 100 years."

Android RoadMate: a GPS Unit Built Specifically for Truckers

As reported by the Android Authority: Magellan may not normally be the name that comes to mind when we think of leading GPS technologies, but the company has a pretty nice announcement for us at this year’s CES 2015. The navigation company has just announced a new GPS unit, officially dubbed the RoadMate RC9485T-LMB. It’s designed specifically for truckers and commercial drivers, and aims to provide a better navigation experience for those going on long trips.

The RoadMate supports multiple driver sign-ins, as well as customizable routes and truck preferences. On top of these already handy features, the GPS device offers:
  • Multiple-Stop Routing lets drivers plan their trip with multiple stops in the order they want, or automatically optimizes for the most efficient route, helping to save time and money (see the routing sketch after this list).
  • Free Lifetime Traffic Alerts, sent directly to their GPS unit, lets users plan more precise travel times and ETAs by avoiding traffic jams and other delays.
  • Junction View displays a realistic image of the road and highway signs to help guide drivers to the correct lane that the vehicle needs to be in for safe merging and exiting.
  • Landmark Guidance gives users an easier way to navigate to their destinations by telling them to turn at familiar landmarks, such as gas stations, stores or other large, easily-seen places instead of only street names that may be hard to locate and/or read.
  • Highway Lane Assist helps when navigating complex highway interchanges, ensuring that a driver stays on the correct roadway.
  • Exit POIs indicate where truck stops, food, lodging, rest areas, and weigh stations are located at an approaching exit.
  • Integrated Bluetooth 4.0 wireless technology allows drivers to safely talk hands-free on compatible Bluetooth phones.
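
As a rough illustration of what "automatically optimizes for the most efficient route" can mean, here is a minimal nearest-neighbor stop-ordering sketch. The straight-line distance function and the greedy heuristic are stand-ins chosen for brevity; Magellan hasn't disclosed how the RoadMate actually orders stops, and a real router would use road-network distances and truck restrictions.

```python
import math

def distance(a, b):
    """Straight-line distance between two (lat, lon) points in degrees.

    A real router would use road-network distances; this is only a stand-in."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def order_stops(start, stops):
    """Greedy nearest-neighbor ordering of stops, beginning at `start`."""
    remaining = list(stops)
    route = []
    current = start
    while remaining:
        nearest = min(remaining, key=lambda s: distance(current, s))
        remaining.remove(nearest)
        route.append(nearest)
        current = nearest
    return route

# Example: a depot and three delivery stops given as (lat, lon) pairs.
print(order_stops((40.0, -105.0), [(41.0, -104.0), (40.1, -105.1), (42.0, -103.0)]))
```
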
The Magellan RoadMate will be available to the public sometime in Q1 of 2015, and will retail starting at $299.99. We wouldn't be surprised to see these devices on the road everywhere in the coming months.

Monday, January 5, 2015

NVIDIA Unveils Automotive Computing Platforms

As reported by Hot Hardware: NVIDIA CEO Jen-Hsun Huang hosted a press conference at the Four Seasons Hotel in Las Vegas this evening, to officially kick off the company’s Consumer Electronics Show activities. Jen-Hsun began the press conference with a bit of back story on the Tegra K1 and how it took NVIDIA approximately two years to get Kepler-class graphics into Tegra, but that it was able to cram a Maxwell-derived GPU into the just-announced Tegra X1 in just a few months. We’ve got more details regarding the Tegra X1 in this post, complete with pictures of the chip and reference platform, game demos, benchmarks and video of the Tegra X1 in action with a variety of workloads, including 4K 60 FPS video playback.

Over and above what we talked about in our hands-on with the Tegra X1, Jen-Hsun showed a handful of demos powered by the chip. In one demo, NVIDIA showed the Tegra X1—in a roughly 10 watt power envelope—running the Unreal Engine 4 Elemental demo. The Maxwell-based GPU in the SoC not only has the horsepower to run such a complex graphics demo, but the features and API support to render some of the more complex effects. Jen-Hsun’s main point with the demo was that this same demo was used to showcase the Xbox One last year, but the Xbox One consumes roughly 10x the power. Note that a 10 watt Tegra X1 would likely be clocked much higher than the version of the chip that will find its way into tablets.

NVIDIA CEO Jen-Hsun Huang Delivers Tegra X1 Unveil At CES 2015

Jen-Hsun also disclosed that the Tegra X1 has FP16 support and is capable of just over 1 TFLOPS of compute performance. Jen-Hsun said that kind of performance isn’t necessary for smartphones at this point, but went on to talk about a number of automotive-related applications and rich auto displays that could leverage the Tegra X1’s capabilities. NVIDIA’s CEO then unveiled the NVIDIA Drive CX Digital Cockpit Computer featuring the Tegra X1. The Drive CX can push up to a 16.6 Mpixel max resolution, which is equivalent to roughly two 4K displays. But keep in mind that all of those pixels don’t have to reside on a single display—multiple displays can be used to add touch-screens to different areas of the car, or to power back-seat entertainment systems with individual screens, and so on.
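
The "roughly two 4K displays" figure checks out with simple arithmetic: a 3840 x 2160 panel is about 8.3 million pixels, so two of them come to roughly 16.6 million, matching the quoted maximum.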


NVIDIA Drive CX

The NVIDIA Drive CX is complemented by some new software dubbed NVIDIA Drive Studio, a design suite meant for developing in-car infotainment systems. The NVIDIA Drive Studio software suite covers multimedia playback, navigation, text-to-speech, climate control, and anything else necessary for automotive applications. In a demo showing the Drive CX and Studio software in action, Jen-Hsun showed a basic media player on-screen alongside a fully 3D navigation system with a Tron-like theme, complete with accurate lighting, ambient occlusion, GPU-rendered vectors, and other advanced effects. The demo also included full Android running “in the car”, a surround-view camera system, and a customizable high-resolution digital cluster system using physically based rendering. The graphics fidelity offered by the Drive CX system was excellent, and clearly superior to anything we’ve seen before in other in-car infotainment systems.


The automotive-related talk then evolved into a discussion regarding autonomous driving cars, environmental and situational awareness, path-finding, and learning. Jen-Hsun then unveiled the NVIDIA Drive PX Auto-Pilot platform, which is powered by not one, but two Tegra X1 chips. The Tegra X1s, running in tandem or in a redundant configuration, can connect to up to 12 high-definition cameras and can process up to 1.3 Gpixels/s. The dual Tegra X1 chips offer up to 2.3 TFLOPS of compute performance, can record dual 4K streams at 30Hz, and leverage a technology NVIDIA is calling Deep Neural Network Computer Vision.
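
For a sense of scale, the quoted 1.3 Gpixels/s sits right in the range a dozen 1080p cameras would produce at typical frame rates; the snippet below works through that back-of-the-envelope figure. The per-camera resolution and frame rates are assumptions, since NVIDIA didn't specify them.

```python
# Back-of-the-envelope check on the Drive PX's quoted 1.3 Gpixels/s.
# Camera resolution and frame rates below are assumed, not NVIDIA specifications.
cameras = 12
width, height = 1920, 1080          # assumed "high-definition" camera resolution
for fps in (30, 60):                # assumed frame rates that bracket the figure
    gpix = cameras * width * height * fps / 1e9
    print(f"{fps} fps: {gpix:.2f} Gpixels/s")
# Prints ~0.75 Gpixels/s at 30 fps and ~1.49 Gpixels/s at 60 fps.
```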


At a high level, the NVIDIA Drive PX works like this: camera data is brought into the Drive PX through a crossbar, and the data is then fed to the correct block inside the platform for whatever workload is prescribed. The Drive PX then uses GPU-accelerated “deep learning” to do things like identify objects (i.e., computer vision) and assess situations and environments. Bits of data reside in what amount to “neurons”, which are all linked by “synapses”, and the network is trained to compare and combine those bits of data to learn what they actually represent. Such a neural network, for example, may contain bits of data representing headlights, wheels, geometric shapes, and so on, which when combined tell the neural network it’s seeing a car. To detect humans, the bits of data could be body parts like arms, legs, and a torso.
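
A drastically simplified sketch of that "neurons and synapses" idea: learned weights combine lower-level detections (headlights, wheels, and so on) into a score for the class "car". The feature names, weights and bias below are invented for illustration; a real network learns millions of such weights from labeled images and works on raw pixels rather than hand-named features.

```python
import math

# Hypothetical detector outputs for one image region, each in [0, 1].
features = {"headlights": 0.9, "wheels": 0.8, "license_plate": 0.7, "torso": 0.05, "legs": 0.1}

# "Synapse" weights a trained network might assign to the class "car" (made-up values).
car_weights = {"headlights": 2.0, "wheels": 2.5, "license_plate": 1.5, "torso": -1.0, "legs": -1.0}
bias = -3.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

score = sigmoid(sum(car_weights[name] * value for name, value in features.items()) + bias)
print(f"probability this region is a car: {score:.2f}")
```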

NVIDIA then showed a demo of the Drive PX platform in action, after only a few weeks of training. The demo showed the setup detecting crosswalk signs to identify areas with high pedestrian traffic. They also showed speed limit-sign detection and pedestrian detection. NVIDIA also showed the Drive PX handling more difficult detection, however, of things like occluded pedestrians (say, someone walking between cars) and reading signs in poorly lit, nighttime environments. The Drive PX was so precise, it was even able to detect and alert the driver to upcoming traffic cameras, brake lights, and congestion. We should also mention that these demos weren’t limited to detecting singular things—the platform detected many things simultaneously and was able to alert drivers to upcoming traffic and police cars (or other emergency vehicles) coming from behind. It is even smart enough to detect different vehicle types and situations to make specific driving recommendations. For example, if a work truck is detected ahead at the side of the road, the driver could be alerted to move over.


To quantify the Tegra X1’s performance in the context of neural networks and computer vision, Jen-Hsun also talked about the AlexNet test, which uses ImageNet classification with deep convolutional neural networks for object detection. The test uses 60 million parameters and 650,000 neurons to classify 1,000 different items. When running the test, the Tegra X1 is able to recognize 30 images per second. For comparison, the Tegra K1 could only manage about 12 images per second.
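
Put another way, 30 images per second is about 33 ms per classification on the Tegra X1, versus roughly 83 ms on the Tegra K1, a 2.5x speedup.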

There was no GeForce news from NVIDIA just yet, but CES hasn’t officially started.

Drone Designed to Fly Life-Rings to Distressed Swimmers

As reported by Gizmag: The speed at which drones can be deployed makes them ideal for delivering items when time is of the essence. The Ambulance Drone and Defikopter, for example, are used for transporting defibrillators to those in need. Now, Project Ryptide plans to use drones to deliver life-rings to swimmers in distress.

Unlike the similar Pars aerial robot, the Ryptide is not actually a drone itself. It's an attachment designed to be installed on a drone and carry a folded, inflatable life-ring. When the drone has been flown to a location above the distressed swimmer, a button on the drone controller can be pressed to remotely release the life-ring. When the life-ring hits the water, a salt tablet dissolves, allowing a spring pin to pierce a CO2 cartridge and the life-ring to inflate in about three seconds.

The project, which is at pre-production prototype stage, was conceived by Bill Piedra, a part-time teacher at the King Low Heywood Thomas (KLHT) school in Stamford, Connecticut. Piedra began working on the design in January 2014 and then began developing it further with students at KLHT in September 2014.

"Ryptide was designed so that anyone can be a lifeguard," Piedra tells Gizmag. "We had the casual user in mind when we designed the basic model; someone that might take their drone to the beach, boating, a lake, or even ice skating. It could be useful in the case of someone falling through the ice while skating, for example."

There will be three different versions of the Ryptide. The basic model is designed to attach to most small drones with no tools required and weighs 420 g (14.8 oz). The multi-ring model can carry up to four life-rings that can be dropped one at a time and weighs in at a heavier 890 g (31.4 oz). The final version will carry four life-rings as well as a camera.

The life-rings used by the Ryptide are reusable and can be "recharged" using a kit that will be available with the attachment. Piedra says the life-rings are SOLAS Approved (International Convention for the Safety of Life at Sea), with United States Coast Guard Academy (USCGA) approval pending.

A crowdfunding campaign for Project Ryptide is expected to be launched on Kickstarter this month. The targeted funds will be used to build and market the system.

Saturday, January 3, 2015

Researchers Use GPS to Track Antarctica's Ice Migration in Real Time

As reported by Gizmodo: Antarctica's melting ice sheets have been a major contributor to global sea level increases over the last decade, and the losses are expected to accelerate over the next two centuries. But researchers attempting to study the rate at which these sheets move and melt have been hamstrung by conventional monitoring methods. That's why a team from the ULB's Laboratoire de Glaciologie has gone ahead and connected one such ice sheet to the Internet of Things.
Conventional methods of monitoring the rate at which ice sheets slowly slip into the sea (and calve off into icebergs) rely on readings from passing satellites, which can only provide snapshots of the sheet's movement. To obtain a more accurate and timely understanding of the situation, researchers from the ULB have installed a series of GPS sensors and phase-sensitive radar along the Roi Baudouin ice shelf in Dronning Maud Land, East Antarctica. These devices will monitor the sheet's shifts in real time, providing climatologists with daily, not weekly, updates. What's more, that data is also delivered to the project's Twitter feed, @TweetingIceShelf, and broadcast across the Internet.
Earlier this month researchers installed three GPS sensors along a 15-meter-deep depression in the ice shelf. This depression was caused by ice that had already slipped off the underlying bedrock into the ocean, melting from the bottom up and forming massive subsurface cavities. The GPS sensors record their relative positions hourly and upload that data twice daily over a satellite-phone data link.
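
As a sketch of what hourly relative positions make possible, the snippet below turns a day of GPS fixes into a flow-speed estimate. The local-coordinate format and the sample values are assumptions for illustration; the article doesn't describe the team's actual processing chain.

```python
import math

def daily_speed(fixes):
    """Mean flow speed in metres per day from hourly (x, y) fixes in a local frame (metres)."""
    dx = fixes[-1][0] - fixes[0][0]
    dy = fixes[-1][1] - fixes[0][1]
    hours = len(fixes) - 1
    return math.hypot(dx, dy) * 24.0 / hours

# Hypothetical day of hourly fixes drifting about 3 cm per hour toward the ocean.
fixes = [(i * 0.03, 0.0) for i in range(24)]
print(f"{daily_speed(fixes):.2f} m/day")   # ~0.72 m/day
```
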
The pRES radar prior to being buried in the Antarctic snow
Additionally, the research team also installed a phase-sensitive radar array along the same depression in order to better monitor any changes to the shelf's internal structure—the growth of those 150-meter-deep subsurface cavities, for instance. As the project's website explains:
A radar signal is transmitted through the ice and reflects off the contact with the ocean. The second antenna receives the reflected signal, which has been attenuated while passing through ice impurities and denser layers of ice. The phase-sensitive radar (or pRES) is capable of detecting changes in the position of these layers. So, we will be capable of measuring the internal flow of the ice shelf. But we will also be capable of detecting changes at the contact with the ice shelf, whether there is melting, how much and when precisely.
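
The phase-sensitive part comes down to a simple relation: for a two-way radar path, a phase change of Δφ at wavelength λ corresponds to a range change of about λ·Δφ/(4π), so small fractions of a cycle translate into millimetre-scale shifts of internal layers or of the ice-ocean contact. The centre frequency below is an assumed, typical pRES value, not one reported for this deployment.

```python
import math

C = 299_792_458.0      # speed of light in vacuum, m/s
FREQ_HZ = 300e6        # assumed pRES centre frequency (~300 MHz), not from the article
N_ICE = 1.78           # approximate refractive index of glacial ice

wavelength_in_ice = C / (N_ICE * FREQ_HZ)

def range_change(delta_phase_rad):
    """Two-way range change implied by a phase shift of a reflection within the ice."""
    return wavelength_in_ice * delta_phase_rad / (4.0 * math.pi)

print(f"{range_change(math.radians(5)) * 1000:.2f} mm for a 5-degree phase shift")
```
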
The project, dubbed BELARE (Belgian Antarctic Research Expedition), is expected to run through next December.