Search This Blog

Tuesday, January 6, 2015

CES 2015: Toyota Opens Patents on Hydrogen Fuel Cell Technology

As reported by the LA Times: Hoping to speed development of hydrogen fuel cell vehicles, Toyota said Monday that it would offer thousands of patents on related technologies to rival automakers, for free.

The announcement, made at the annual Consumer Electronics Show in Las Vegas, echoes a similar move by electric car maker Tesla in 2014, when Chief Executive Elon Musk made Tesla patents available to all, hoping to spur innovation in the electric vehicle world (and, perhaps, to draw publicity).

Toyota has similar goals for the fuel-cell car market.

“At Toyota, we believe that when good ideas are shared, great things can happen,” Bob Carter, senior vice president at Toyota, said before the announcement. “The first generation hydrogen fuel cell vehicles, launched between 2015 and 2020, will be critical, requiring a concerted effort and unconventional collaboration.”

Toyota will make 5,680 patents available to automakers to build and sell their own fuel cell vehicles. Parts suppliers, energy companies and bus manufacturers can also use the patents, which remain royalty-free through 2020.

Of the total, 70 patents relate directly to hydrogen fueling stations, a move both Toyota and analysts say could spur the wider adoption of hydrogen electric vehicles.

"I think overall it makes sense," said Devin Lindsay, principle powertrain anaylst with IHS Automotive. "Right now the automakers all need to help each other, and more infrastructure is going to help kick-start the industry."

The patents also relate to Toyota’s upcoming Mirai hydrogen fuel cell car, which is slated to hit the U.S. market in October and is already on sale in Japan for the equivalent of about $60,000. With a range around 300 miles, it can refuel at a hydrogen station in about five minutes.

Although Toyota’s move Monday will help advance the development of hydrogen fuel cell vehicles, the automaker may not be sacrificing much in making its patents available.

“I don’t think the technology that Toyota has is that groundbreaking,” said David Cole, head of AutoHarvest Foundation, a nonprofit at Wayne State University in Detroit, and chairman emeritus of the Center for Automotive Research. “It’s not a patent issue.”


Instead, the development of cost-effective hydrogen fuel cell vehicles has been stymied by the high cost of research and development, and by a shortage of brainpower necessary to figure out how to make the hydrogen fuel itself more energy dense, and therefore more efficient, Cole said.

This is one reason fiercely competitive automakers are eager to work together on fuel cell technologies. Honda, with its own hydrogen fuel cell car set for 2016, has partnered with General Motors on new fuel cell applications. Both companies lead the industry in fuel cell patents.

And Ford, Renault, Nissan, and Mercedes’ parent company, Daimler, recently agreed to develop fuel cell technologies that all four companies would share.

“It’s historic the amount of collaboration that’s occurring,” Cole said. “If automakers don’t, we’re not going to get down the fuel cell road as far and as fast as we like.”

Toyota says it's been developing its fuel cell technology for the last 20 years. But Toyota knows that, on its own, it can't sway the industry toward the widespread adoption of hydrogen as a fuel source.

"We believe that hydrogen electric will be the primary fuel for the next 100 years," Carter said. "Now, it’s not going to happen overnight. By eliminating the traditional corporate boundaries, we can speed the metabolism of everyone’s research and development and move into a future of mobility quicker more effecively and more economically."

Meanwhile, Hyundai has a hydrogen version of its Tucson crossover available for lease, and brands such as Volkswagen, Audi, and BMW have all shown prototypes and concept versions of hydrogen vehicles.

Automakers relish the idea of hydrogen fuel cell vehicles for several reasons. Many consumers still feel psychological “range anxiety” about pure electric cars, despite claims by the brands selling them that a driver’s typical commute is far shorter than the maximum range.

Fuel cell vehicles have a much longer range -- 300 to 400 miles is typical -- and can refill in a matter of minutes. Yet the smooth, quiet drivetrain of a fuel cell vehicle is very similar to that of an electric car. The key difference is the power source.

Rather than drawing power directly from rechargeable batteries, a hydrogen fuel cell vehicle generates its electricity onboard: the fuel cell combines hydrogen with oxygen, creating electricity and emitting only water vapor as “waste.”
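For readers keeping score on the chemistry, the overall reaction is the familiar one; the fuel cell simply captures the released energy as electric current rather than as heat from combustion:

```latex
2\,\mathrm{H_2} + \mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{H_2O} + \text{electrical energy}
```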

Those clean emissions make fuel cell vehicles zero-emission vehicles in the eyes of state and federal governments. This is another reason automakers are drawn to the promise of fuel cells.

By 2025, the state of California wants 1.5 million zero-emission vehicles on the road and 15% of vehicles sold to be zero emission. This includes EVs, plug-in hybrids with limited electric-only ranges, and hydrogen vehicles. (Critics note that the process of manufacturing and distributing hydrogen does create some toxic emissions.)


But limited infrastructure has remained a key hurdle for automakers. At the 2014 L.A. Auto Show in November, VW debuted a version of its upcoming Golf SportWagen that runs on hydrogen.

Yet in launching the concept, the automaker cautioned, “Before the market launch a hydrogen infrastructure would have to be created: Not only a broad network of hydrogen fuel stations, but also the production of the hydrogen itself.”

Currently, California has only 11 hydrogen refueling stations, though some analysts say the total could hit 40 within a year. But Toyota says it's looking big picture.

"This isn’t a six-month or five-year play," Carter said. "This is where we see the automobile industry going for the next 100 years."

Android RoadMate: a GPS Unit Built Specifically for Truckers

As reported by the Android Authority: Magellan may not be the first name that comes to mind when we think of leading GPS technologies, but the company has a pretty nice announcement for us at this year’s CES 2015. The navigation company has just announced a GPS unit for truckers. Officially dubbed the RoadMate RC9485T-LMB, it’s designed specifically for truckers and commercial drivers, and aims to provide a better navigation experience for those going on long trips.

The RoadMate supports multiple driver sign-ins, as well as customizable routes and truck preferences. On top of these already handy features, the GPS device offers:
  • Multiple-Stop Routing lets drivers plan their trip with multiple stops in the order they want, or automatically optimizes for the most efficient route, helping to save time and money (see the sketch after this list).
  • Free Lifetime Traffic Alerts, sent directly to their GPS unit, lets users plan more precise travel times and ETAs by avoiding traffic jams and other delays.
  • Junction View displays a realistic image of the road and highway signs to help guide drivers to the correct lane that the vehicle needs to be in for safe merging and exiting.
  • Landmark Guidance gives users an easier way to navigate to their destinations by telling them to turn at familiar landmarks, such as gas stations, stores or other large, easily seen places, instead of only street names that may be hard to locate and/or read.
  • Highway Lane Assist helps when navigating complex highway interchanges, ensuring that a driver stays on the correct roadway.
  • Exit POIs indicate where truck stops, food, lodging, rest areas, and weigh stations are located at an approaching exit.
  • Integrated Bluetooth 4.0 wireless technology allows drivers to safely talk hands-free on compatible Bluetooth phones.
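As a rough illustration of what the Multiple-Stop Routing feature is doing under the hood, here is a minimal Python sketch that orders stops with a simple nearest-neighbor heuristic. Magellan has not published its routing algorithm, so the function name, coordinates and straight-line distances below are illustrative assumptions only; a real unit would optimize over road-network travel times.

```python
import math

def nearest_neighbor_order(start, stops):
    """Toy multi-stop ordering: repeatedly visit the closest remaining stop."""
    def dist(a, b):
        # Rough straight-line distance in degrees; good enough to show the idea,
        # not for real navigation.
        return math.hypot(a[0] - b[0], a[1] - b[1])

    remaining = list(stops)
    route = []
    current = start
    while remaining:
        nxt = min(remaining, key=lambda stop: dist(current, stop))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

# Hypothetical depot and delivery stops as (latitude, longitude) pairs.
depot = (33.45, -112.07)
stops = [(33.60, -111.90), (33.30, -111.97), (33.50, -112.20)]
print(nearest_neighbor_order(depot, stops))
```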
The Magellan RoadMate will be available to the public sometime in Q1 of 2015, and will retail starting at $299.99. We wouldn’t doubt it if we saw these devices on the road everywhere within the coming months.

Monday, January 5, 2015

NVIDIA Unveils Automotive Computing Platforms

As reported by Hot Hardware: NVIDIA CEO Jen-Hsun Huang hosted a press conference at the Four Seasons Hotel in Las Vegas this evening, to officially kick off the company’s Consumer Electronics Show activities. Jen Hsun began the press conference with a bit of back story on the Tegra K1 and how it took NVIDIA approximately two years to get Kepler-class graphics into Tegra, but that it was able to cram a Maxwell-derived GPU into the just-announced Tegra X1 in just a few months. We’ve got more details regarding the Tegra X1 in this post, complete with pictures of the chip and reference platform, game demos, benchmarks and video of the Tegra X1 in action with a variety of workloads, including 4K 60 FPS video playback.

Over and above what we talked about in our hands-on with the Tegra X1, Jen Hsun showed a handful of demos powered by the chip. In a demo featuring the Unreal Engine 4, NVIDIA showed the Tegra X1—in a roughly 10 watt power envelope—running the Unreal Engine 4 Elemental demo. The Maxwell-based GPU in the SoC not only has the horsepower to run such a complex graphics demo, but the features and API support to render some of the more complex effects. Jen Hsun’s main point with the demo was that this same demo was used to showcase the Xbox One last year, but that the Xbox One consumes roughly 10x the power. Note that a 10 watt Tegra X1 would likely be clocked much higher than the version of the chip that will find its way into tablets.




NVIDIA CEO Jen-Hsun Huang Delivers Tegra X1 Unveil At CES 2015

Jen Hsun also disclosed that the Tegra X1 has FP16 support and is capable of just over 1TFLOPS of compute performance. Jen Hsun said that kind of performance isn’t necessary for smartphones at this point, but went on to talk about a number of automotive-related applications and rich auto displays that could leverage the Tegra X1’s capabilities. NVIDIA’s CEO then unveiled the NVIDIA Drive CX Digital Cockpit Computer featuring the Tegra X1. The Drive CX can push up to a 16.6Mpixel max resolution, which is equivalent to roughly two 4K displays. But keep in mind that all of those pixels don’t have to reside on a single display—multiple displays can be used to add touch-screens to different areas of the car or power back-seat entertainment systems with individual screens, etc.
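A quick back-of-the-envelope check of that "two 4K displays" figure (simple arithmetic, not an NVIDIA spec sheet):

```python
# One 4K UHD panel is 3840 x 2160 pixels; two of them is roughly the
# 16.6-megapixel figure quoted for the Drive CX.
uhd_pixels = 3840 * 2160          # about 8.3 million pixels per panel
print(uhd_pixels * 2)             # 16,588,800 -- roughly 16.6 Mpixels
```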


NVIDIA Drive CX

The NVIDIA Drive CX is complemented by some new software dubbed NVIDIA Drive Studio, which is a design suite meant for developing in-car infotainment systems. The NVIDIA Drive Studio software suite encompasses everything from multimedia playback, navigation, text-to-speech, and climate control to anything else necessary for automotive applications. In a demo showing the Drive CX and Studio software in action, Jen Hsun showed a basic media player on-screen alongside a fully 3D navigation system with a Tron-like theme, complete with accurate lighting, ambient occlusion, GPU-rendered vectors, and other advanced effects. The demo also included full Android running “in the car”, a surround-view camera system, and a customizable high-resolution digital cluster system using physically based rendering. The graphics fidelity offered by the Drive CX system was excellent, and clearly superior to anything we’ve seen before from other in-car infotainment systems.


The automotive-related talk then evolved into a discussion regarding autonomous driving cars, environmental and situational awareness, path-finding, and learning. Jen Hsun then unveiled the NVIDIA Drive PX Auto-Pilot platform, which is powered by not one, but two Tegra X1 chips. The Tegra X1s, running in tandem or in a redundant configuration, can connect to up to 12 high-definition cameras, and can process up to 1.3Gpixels/s. The dual Tegra X1 chips offer up to 2.3 TFLOPS of compute performance, can record dual 4K streams at 30Hz, and leverage a technology NVIDIA is calling Deep Neural Network Computer Vision.


At a high level, the NVIDIA Drive PX works like this: Camera data is brought into the Drive PX through a crossbar, and data is then fed to the correct block inside the platform for whatever workload is prescribed. The Drive PX then uses GPU-accelerated “deep learning” to do things like identify objects, i.e. computer vision, and assess situations and environments. Bits of data reside in what amount to “neurons”, which are all linked by “synapses”, and the network is trained to compare and compile those bits of data to learn what they actually are. These neural networks, for example, may contain bits of data for headlights, wheels, geometric shapes, etc., which, when combined, tell the neural network it’s seeing a car. To detect humans, the bits of data could be body parts like arms, legs, and a torso.
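To make that description concrete, here is a minimal NumPy sketch of the idea: low-level feature detections are combined through weighted "synapses" into class scores. The feature values, weights and class names below are invented for illustration; this is not NVIDIA's network or training pipeline.

```python
import numpy as np

# Made-up feature detections for one camera frame: how strongly earlier
# layers "saw" each low-level pattern (values are illustrative only).
features = np.array([0.9,   # headlight-like shape
                     0.8,   # wheel-like shape
                     0.1,   # arm/leg-like shape
                     0.2])  # torso-like shape

# "Synapse" weights linking features to two object classes: car, pedestrian.
# A trained network learns these; here they are hand-picked to show the idea.
weights = np.array([[ 1.2, -0.5],   # headlights support "car"
                    [ 1.0, -0.4],   # wheels support "car"
                    [-0.6,  1.1],   # limbs support "pedestrian"
                    [-0.5,  1.3]])  # torso supports "pedestrian"

scores = features @ weights                     # combine the evidence
probs = np.exp(scores) / np.exp(scores).sum()   # softmax into probabilities
print(dict(zip(["car", "pedestrian"], probs.round(3))))
```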

NVIDIA then showed a demo of the Drive PX platform in action after only a few weeks of training. The demo showed the setup detecting crosswalk signs to identify areas with high pedestrian traffic. They also showed speed-limit-sign detection and pedestrian detection. NVIDIA also showed the Drive PX handling more difficult cases, such as occluded pedestrians (say, someone walking between cars) and reading signs in poorly lit, nighttime environments. The Drive PX was so precise, it was even able to detect and alert the driver to upcoming traffic cameras, brake lights, and congestion. We should also mention that these demos weren’t limited to detecting one thing at a time—the platform detected many things simultaneously and was able to alert drivers to upcoming traffic and to police cars (or other emergency vehicles) coming from behind. It is even smart enough to detect different vehicle types and situations to make specific driving recommendations. For example, if a work truck is detected ahead at the side of the road, the driver could be alerted to move over.


To quantify the Tegra X1’s performance in the context of neural networks and computer vision, Jen Hsun also talked about the AlexNet test, which uses ImageNet classification with deep convolutional neural networks for object detection. The test uses 60 million parameters and 650,000 neurons to classify 1,000 different items. When running the test, the Tegra X1 is able to recognize 30 images per second. For comparison, the Tegra K1 could only manage about 12 images per second.
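For anyone who wants to get a feel for this kind of images-per-second number on their own hardware, a rough (and admittedly unscientific) timing loop with PyTorch and torchvision's AlexNet implementation might look like the sketch below. This is a generic benchmarking sketch, not the harness NVIDIA used on Tegra, and it assumes PyTorch and torchvision are installed on the device.

```python
import time
import torch
from torchvision import models

# Rough images-per-second measurement for AlexNet-style inference.
model = models.alexnet().eval()      # architecture with default (untrained) weights
batch = torch.randn(1, 3, 224, 224)  # one ImageNet-sized input image

with torch.no_grad():
    for _ in range(5):               # warm-up iterations
        model(batch)
    start = time.time()
    n = 50
    for _ in range(n):
        model(batch)
    elapsed = time.time() - start

print(f"{n / elapsed:.1f} images/sec")
```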

There was no GeForce news from NVIDIA just yet, but CES hasn’t officially started.

Drone Designed to Fly Life-Rings to Distressed Swimmers

As reported by Gizmag: The speed that drones can be deployed makes them ideal for delivering items when time is of the essence. The Ambulance Drone and Defikopter, for example, are used for transporting defibrillators to those in need. Now, Project Ryptide plans to use drones to deliver life-rings to swimmers in distress.

Unlike the similar Pars aerial robot, the Ryptide is not actually a drone itself. It's an attachment designed to be installed on a drone and carry a folded, inflatable life-ring. When the drone has been flown to a location above the distressed swimmer, a button on the drone controller can be pressed to remotely release the life-ring. When the life-ring hits the water, a salt tablet dissolves allowing a spring pin to pierce a CO2 cartridge and the life-ring to inflate in about 3 seconds.

The project, which is at pre-production prototype stage, was conceived by Bill Piedra, a part-time teacher at the King Low Heywood Thomas (KLHT) school in Stamford, Connecticut. Piedra began working on the design in January 2014 and then began developing it further with students at KLHT in September 2014.

"Ryptide was designed so that anyone can be a lifeguard," Piedra tells Gizmag. "We had the casual user in mind when we designed the basic model; someone that might take their drone to the beach, boating, a lake, or even ice skating. It could be useful in the case of someone falling through the ice while skating, for example."

There will be three different versions of the Ryptide. The basic model is designed to attach to most small drones with no tools required and weighs 420 g (14.8 oz). The multi-ring model can carry up to four life-rings that can be dropped one at a time, and weighs in at a heavier 890 g (31.4 oz). The final version will carry four life-rings as well as a camera.

The life-rings used by the Ryptide are reusable and can be "recharged" using a kit that will be available with the attachment. Piedra says the life-rings are SOLAS Approved (International Convention for the Safety of Life at Sea), with United States Coast Guard Academy (USCGA) approval pending.

A crowdfunding campaign for Project Ryptide is expected to be launched on Kickstarter this month. The targeted funds will be used to build and market the system.



Saturday, January 3, 2015

Researchers Use GPS to Track Antarctica's Ice Migration in Real Time

As reported by Gizmodo: Antarctica's melting ice sheets have been a major contributor to global sea level increases over the last decade, and the losses are expected to accelerate over the next two centuries. But researchers attempting to study the rate at which these sheets move and melt have been hamstrung by conventional monitoring methods. That's why a team from the ULB's Laboratoire de Glaciologie has gone ahead and connected one such ice sheet to the Internet of Things.
Conventional methods of monitoring the rate at which ice sheets slowly slip into the sea (and calve off into icebergs) rely on readings from passing satellites, which can only provide snapshots of the sheet's movement. To obtain a more accurate and timely understanding of the situation, researchers from the ULB have installed a series of GPS sensors and phase-sensitive radar along the Roi Baudouin ice shelf in Dronning Maud Land, East Antarctica. These devices will monitor the sheet's shifts in real time, providing climatologists with daily, not weekly, updates. What's more, that data is also delivered to the project's Twitter feed, @TweetingIceShelf, and broadcast across the Internet.
Earlier this month researchers installed three GPS sensors along a 15 meter-deep depression in the ice shelf. This depression was caused by ice that had already slipped off the underlying bedrock into the ocean, melting from the bottom up and forming massive subsurface cavities. The GPS sensors record their relative positions hourly and upload that data twice daily using a satellite-phone data-link.
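The post doesn't detail the downstream processing, but turning those hourly fixes into a flow estimate is conceptually simple. A minimal Python sketch, with made-up positions in a local metric grid, might look like this:

```python
import math

# Hypothetical hourly GPS fixes for one sensor over a day: (easting_m, northing_m)
# in a local metric grid (real data would arrive as lat/lon and be projected).
fixes = [(1000.0 + 0.05 * h, 2000.0 + 0.02 * h) for h in range(25)]  # 24 h of motion

dx = fixes[-1][0] - fixes[0][0]
dy = fixes[-1][1] - fixes[0][1]
displacement_m = math.hypot(dx, dy)

print(f"daily displacement: {displacement_m:.2f} m")
print(f"mean speed: {displacement_m * 365:.0f} m/yr")  # crude annualized rate
```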
The pRES radar prior to being buried in the Antarctic snow
Additionally, the research team installed a phase-sensitive radar array along the same depression in order to better monitor any changes to the shelf's internal structure—the growth of those 150-meter-deep subsurface cavities, for instance. As the project's website explains:
A radar signal is transmitted through the ice and reflects off the contact with the ocean. The second antenna receives the reflected signal that has been attenuated while going through ice impurities and denser layers of ice. The phase-sensitive radar (or pRES) is capable of detecting changes in the position of these layers. So, we will be capable of measuring the internal flow of the ice shelf. But we will also be capable of detecting changes at the contact with the ice shelf, whether there is melting, how much and when precisely.
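The quoted description glosses over the numbers, but the standard relation for a phase-sensitive radar is that a change in the two-way path to a reflector shows up as a phase shift, so layer displacements of a small fraction of the radar wavelength can be resolved. Under that standard assumption:

```latex
\Delta\phi = \frac{4\pi\,\Delta r}{\lambda}
\quad\Longrightarrow\quad
\Delta r = \frac{\lambda}{4\pi}\,\Delta\phi
```

where Δr is the change in range to an internal layer or the ice-ocean contact, λ is the radar wavelength in ice, and Δφ is the measured phase change between repeat measurements.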
The project, dubbed BELARE (Belgian Antarctic Research Expedition), is expected to run through next December.

Tuesday, December 30, 2014

Livemap Lands $300K Grant For Its Motorcycle Helmet With Built-In Navigation

As reported by TechCrunch: As we’re coming up on the next Consumer Electronics Show, I got an update from one of the companies that participated in TechCrunch’s Hardware Battlefield at the last CES — Russian startup Livemap.

The Livemap team is working to create motorcycle helmets with voice control and GPS navigation directly in your field of vision — so while you’re riding, you can see directions in your helmet display without having to fiddle with another device or look away from the road. (Back in January, the Livemap team demonstrated an early version of their display, which was transparent enough to show a map without obscuring the road ahead.)

CEO Andrew Artishchev told me via email that most of the past year has been spent building the pre-production prototype of Livemap’s optics. Those optics will be built entirely of aspheric lenses, allowing the helmet to, in his words, be “smaller and lighter and sometimes cheaper than the multi-lens design.” He added that the other big focus has been creating a design that will keep the optics costs down.
Now Livemap plans to unveil its prototype in the spring, and to start sales this summer in its first market, the United States.

To help create the prototype, Livemap has also received a grant of 14.7 million rubles from the Russian Ministry of Science. (That’s a little under $300,000 in U.S. dollars.) If you’re fluent in Russian or don’t mind using Google Translate, you can read more about the grant here.
Artishchev also commented on the emergence of a new competitor, Skully, which he dismissed as “only part of Google Glass.”
“The product called Skully P1 is, in short words, like Google Glass put into a helmet — with all its disadvantages like tiny screen, low saturation and contrast, low resolution,” he added.

Monday, December 29, 2014

GPS III and OCX Successfully Demonstrate Key Satellite Command and Control Capabilities

As reported by Space Flight Insider: Lockheed Martin and Raytheon Company successfully completed the fourth of five planned launch and early orbit exercises to demonstrate new automation capabilities, information assurance and launch readiness of the world’s most powerful and accurate Global Positioning System (GPS), the U.S. Air Force’s next generation GPS III satellite and Operational Control System (OCX).

Successful completion of Exercise 4 on Oct. 3 represents a key milestone demonstrating the end-to-end capability to automatically transfer data between Raytheon’s OCX and Lockheed Martin’s GPS III satellite. One additional readiness exercise, five launch rehearsals and a mission dress rehearsal are planned prior to launch of the first GPS III satellite with OCX.

The exercise used the latest baseline of Raytheon’s OCX Launch Checkout System (LCS) software featuring integrated information assurance functionality for the first time, and the latest version of Lockheed Martin’s GPS III satellite simulator. Exercise 4 successfully demonstrated mission planning and scheduling capabilities with the simulated Air Force Satellite Control Network (AFSCN) for the first time, including a replan scenario that would occur in the event of a launch slip.

The system also automatically generated antenna pointing angles for the simulated AFSCN, which until now have been manually generated. Exercise 4 expands on three previous exercises, introducing maneuver planning and reconstruction capabilities, as well as advanced planning and scheduling with AFSCN assets. The automation of these capabilities will allow GPS operators to spend their time optimizing system performance rather than focusing on routine operations.

“As part of establishing the LCS Block 0 baseline, the completion of Exercise 4 demonstrates the capability of OCX to successfully support a GPS-III satellite launch in an information assurance hardened environment,” stated Matthew Gilligan, Raytheon vice president and GPS OCX program manager. “Exercise 4 began the instantiation of vital OCX automation capabilities that give operators their time back in order to focus on mission critical activities, one of the important elements of a modernized GPS.”

“Launch Exercise 4 demonstrated the team’s ability to complete nearly 100 percent of the GPS III space vehicle 1 launch and early orbit mission sequence,” said Mark Stewart, vice president for Lockheed Martin’s Navigation Systems mission area. “The findings the team made during this robust launch exercise will help mature the processes, procedures and tools necessary to enter our rehearsal phase and, ultimately, the launch and checkout mission.”

GPS III satellites will deliver three times better accuracy, provide up to eight times improved anti-jamming capabilities, and include enhancements that extend spacecraft life to 15 years, 25 percent longer than the newest Block IIF satellites. GPS III will be the first generation of GPS satellite with a new L1C civil signal designed to make it interoperable with other international global navigation satellite systems. The first GPS III satellite is currently undergoing integration and testing, with final space vehicle delivery planned for late 2015.

OCX is being developed in two blocks using a commercial best practice iterative software development process, with seven iterations in Block 1 and one iteration in Block 2. Exercise 4 was conducted using the recently completed Iteration 1.5 software, representing an early delivery of the final software baseline. Exercise 5, scheduled for 2015, will include critical information assurance features needed to support launch of the first GPS III satellite.

The GPS III team is led by the Global Positioning Systems Directorate at the U.S. Air Force Space and Missile Systems Center. Air Force Space Command’s 2nd Space Operations Squadron (2SOPS), based at Schriever Air Force Base, Colorado, manages and operates the GPS constellation for both civil and military users.

About Lockheed Martin: Headquartered in Bethesda, Maryland, Lockheed Martin is a global security and aerospace company that employs approximately 113,000 people worldwide and is principally engaged in the research, design, development, manufacture, integration and sustainment of advanced technology systems, products and services. The Corporation’s net sales for 2013 were $45.4 billion.

About Raytheon: Raytheon Company, with 2013 sales of $24 billion and 63,000 employees worldwide, is a technology and innovation leader specializing in defense, security and civil markets throughout the world. With a history of innovation spanning 92 years, Raytheon provides state-of-the-art electronics, mission systems integration and other capabilities in the areas of sensing; effects; and command, control, communications and intelligence systems, as well as cyber security and a broad range of mission support services. Raytheon is headquartered in Waltham, Massachusetts.