
Friday, May 9, 2014

BMW Unveils the Solar Charging Carport of the Future

As reported by Motor Authority: So you have a new BMW i3 or i8 in your driveway, and you’re loving the freedom the electric-drive capability gives you—but you’d like to be even greener? BMW DesignworksUSA has the answer with its stunningly simple, high-tech solar carport.

It’s still a concept design at this stage, but the bamboo and carbon fiber structure just begs to be built. Supported atop the structure is an array of solar panels that harvest the sun’s energy and store it in your BMW i vehicle.

In addition to being greener than charging from the grid, the solar energy carport allows the BMW i owner to be more self-sufficient in their energy supply. To harness the energy from the solar panels, a BMW i Wallbox Pro is needed. Once integrated, the carport and Wallbox Pro can then directly charge either the i3 or i8. With the Wallbox Pro’s features, excess solar energy not needed to charge the car can be used by the connected house.

“With the solar carport concept we opted for a holistic approach: not only is the vehicle itself sustainable, but so is its energy supply,” explained Tom Allemann of BMW Designworks USA. “This is therefore an entirely new generation of carports that allows energy to be produced in a simple and transparent way. It renders the overarching theme of lightweight design both visible and palpable.”


BMW’s beautiful, functional solar carport certainly complements the ethos behind the company’s i brand. Here’s hoping BMW decides to offer it as an optional upgrade to i owners in the near future.
 

Thursday, May 8, 2014

Should the Feds Drive Smart Cars - Making Use of Telematics?

As reported by the Washington Business Journal: Sure, it’s a little Big Brother. But with telematics, government could save a whole bunch of money — and industry could sell a lot of cool technologies.

Here's what telematics — which combines telecommunications and information processing to send, receive, and store information related to vehicle fleets — would allow agencies to do: monitor not only whether their employees are speeding in government-leased cars, but also whether they’re doing too many rapid starts or sudden stops, or even idling too much. They’ll be able to see if an employee is riding the brakes. They’ll be able to see whether drivers are driving recklessly — and even how an employee handled the car right before an accident.

Little creepy? Maybe. But the technology could save the government a lot of money, the Government Accountability Office said in a new report. Fleet managers could provide tips for drivers on being more fuel-efficient, for example. They could take privileges away from those who aren't following the rules of the road and are risking car crashes. They could even use the data to defend employees against frivolous lawsuits by people claiming a federal worker was at fault in an accident.

On top of that, agencies could analyze precise utilization rates of vehicles in their fleets and eliminate cars or trucks that don’t get used. They could see the remaining brake pad depth and engine diagnostics (yes, this technology gets that granular) to determine when maintenance is needed.
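
To make the GAO's point concrete, here is a minimal sketch, in Python, of the kind of analysis a fleet manager might run over telematics records to flag harsh driving, excessive idling, and underused vehicles; the field names and thresholds are invented for illustration and are not drawn from any GSA or vendor system.

    # Hypothetical sketch: flag harsh-driving events and underused vehicles
    # from simple telematics records. Field names and thresholds are invented
    # for illustration only.

    from collections import defaultdict

    # Each record: (vehicle_id, miles_driven, idle_minutes, hard_brake_events)
    records = [
        ("G-1041", 212.0, 95, 1),
        ("G-1042", 8.5, 240, 0),
        ("G-1043", 150.3, 30, 7),
    ]

    MAX_IDLE_MINUTES = 120      # assumed idling threshold per reporting period
    MAX_HARD_BRAKES = 5         # assumed harsh-braking threshold
    MIN_MILES_FOR_USE = 25.0    # assumed utilization floor

    totals = defaultdict(lambda: {"miles": 0.0, "idle": 0, "brakes": 0})
    for vehicle_id, miles, idle, brakes in records:
        totals[vehicle_id]["miles"] += miles
        totals[vehicle_id]["idle"] += idle
        totals[vehicle_id]["brakes"] += brakes

    for vehicle_id, t in totals.items():
        flags = []
        if t["idle"] > MAX_IDLE_MINUTES:
            flags.append("excessive idling")
        if t["brakes"] > MAX_HARD_BRAKES:
            flags.append("harsh braking")
        if t["miles"] < MIN_MILES_FOR_USE:
            flags.append("underutilized: candidate for fleet reduction")
        print(vehicle_id, flags or ["no issues"])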

There are examples of savings already achieved by agencies, in fact. A fleet manager at Idaho National Laboratory reported that telematics data contributed to the decision to eliminate 65 vehicles since fiscal 2011, with an estimated average annual savings of about $390,000.

So then, with telematics offered as an option by the General Services Administration on the vehicles that it leases to agencies, why aren’t more using it? For one thing, certain costs that telematics would reduce aren't the agencies' problem in the first place. Fuel costs, for example, are covered by a monthly fee based primarily on miles traveled, so being more fuel-efficient isn’t really on their radar.

At the same time, telematics costs money. How much varies depending on what type of technologies are installed and how.

Some might be installed by the manufacturer, some might be add-on systems, and some might be mobile device applications and programs. Some might provide data via satellite or cellular connections, transmitting on a regular basis or when a vehicle passes a fixed-data download station. (Fixed download stations pose mostly upfront, fixed costs, whereas the cost for a satellite connection is typically levied in ongoing monthly data charges, the GAO noted.) On top of all that, fleets may rent telematics devices for a short period of time to obtain a snapshot of usage data, or may select a long-term contract for ongoing monitoring.


That brings us to the contracting community. The General Services Administration is currently engaged in efforts to secure new contracts for telematics devices for federal employees, and hopes to have them available by the end of fiscal year 2014, according to the GAO. None of the specific vendors bidding for the contracts were named. As part of that effort, though, the agency is hoping to leverage the government's buying power, as it's been doing a lot lately through its strategic sourcing efforts. Whether there's enough demand is tough to know, given that agencies have only recently begun to pursue the technology in significant quantities.

For Tesla, Another Year Ahead Of Ramping Up Model S Production

As reported by GigaOm: While Tesla needs to look to the future, to its next car, the Model X, and its plans for a massive battery factory, the electric car company is still solidly concentrating on ramping up manufacturing of its current Model S electric car over the course of this year. In an earnings statement released on Wednesday, Tesla said that it made 7,535 Model S cars — a record — in the first quarter of 2014, and plans to make between 8,500 and 9,000 Model S cars in the following quarter.

In total Tesla wants to deliver 35,000 Model S cars in 2014, with an eventual production rate of 1,000 vehicles per week (it’s currently at 700 per week). While the ramp-up in manufacturing might sound like a relatively minor move, the supply of lithium-ion battery cells available to Tesla is actually one of the more substantial limits on that growth. Tesla says in its shareholder letter that “battery cell supply will constrain development in Q2 but improve in Q3.”
Tesla is looking to ramp up shipments of Model S cars to China and Europe significantly this year, and says China could be one of its largest markets within a few years. Musk said on the earnings call that he was “blown away” by the interest in and enthusiasm for Tesla in China, and that in three to four years Tesla could be manufacturing cars in China for the Chinese market. Musk also said Tesla is considering manufacturing locally in Europe for European customers.

Beyond the Model S, Tesla is working on the production design prototypes for its Model X crossover electric vehicle, and the company says those will be available in Q4 of this year. Tesla also started producing powertrains for the Mercedes B-Class this quarter.

For the factory, Musk announced on the call that Tesla has a signed letter of intent with Panasonic as a partner on its battery factory. Tesla CTO JB Straubel said in the call that Tesla and Panasonic have a joint working team focused on exploring mutual topics, answering questions, and making progress on closing the deal on the battery factory. Straubel said that Tesla is heading toward a final agreement with Panasonic later this year.

Tesla could break ground on one of the potential sites for the battery factory as soon as next month, and will break ground on another potential site shortly after that. Tesla will move the process along on two sites until it chooses one. Musk said on the call that California might be back in the running for the battery factory, but that getting to groundbreaking in California could take too long due to regulations.

Even though Tesla beat its sales and profit estimates, the company’s shares dropped on the earnings news. Tesla’s stock fell 5.75 percent in after hours trading.

Here were the financial stats for the quarter:
  • Revenue: $621 million (up from $562 million in Q1 2013)
  • Net loss: $50 million or a $0.40 loss per share (Tesla made a profit of $11.25 million in Q1 2013)
  • GAAP gross margin: 25.3 percent
  • Tesla set aside an unplanned $2 million reserve for the underbody plates it introduced earlier this year.
  • Tesla spent $82 million in R&D expenses for the first quarter.

Google Maps Supports Offline Mobile Mapping

As reported by GigaOm: Google updated its Google Maps apps for both iOS and Android on Tuesday, and with it comes a feature some users have been wanting for a long time — a clear, simple button to save maps offline.

To find the “save map to use offline” button, simply look up a location on either iOS or Android. Scroll to the bottom of the place info sheet; if you’re looking up a restaurant, for instance, the button will be buried underneath user reviews, the user review summary, and other options. While the ability to save maps for times when you won’t have a signal would seem to be a basic feature, the iPhone didn’t get it until last July, and even then users needed to know the secret password-style command “OK maps.”


Aside from offline maps, this update seems to be a big one. On the iPhone, it carries the landmark version number 3.0.0, and it comes with a number of new features.

Google Maps now can tell you the best lane to get in on the highway, so you don’t miss an exit. Google has added several filters to its business directory search, including the long-awaited “open now.” There’s also a little bit of Uber integration, and transit directions will now tell you when the last train is coming. Handy!

The update is already available for iOS and will be rolling out to Android phones over the next few days.

Wednesday, May 7, 2014

The First American Space Lifeboat In 40 Years Is Coming To The ISS

As reported by Electronic Products: NASA’s next generation of American spacecraft will be designed to carry people into low-Earth orbit and also function as a lifeboat for the International Space Station (ISS) for up to seven months. This service hasn't been provided by an American spacecraft since an Apollo command module remained docked to Skylab for three months from 1973 to ’74.

Similar to a lifeboat on a cruise ship, the spacecraft isn't expected to be called into service to quickly evacuate people, but it certainly has to be ready to do so just in case.

Currently the lifeboat function of the space station is served by a pair of Russian Soyuz spacecraft, which are docked at all times. Each Soyuz can hold three people, so with two of them docked, there can be six people working on the station at one time.

According to NASA engineers working with companies developing spacecraft in the agency’s Commercial Crew Program (CCP), in order for a spacecraft to be considered a lifeboat, it must provide a shelter for astronauts in case of an issue aboard the station. The ship must also be able to quickly get all of its systems operating and detach from the station for a potential return to Earth.

When it comes to the lifeboat feature, two obstacles that make it difficult for spacecraft designers are power and protection from things outside the spacecraft, such as micrometeoroids. The electricity generated by the space station’s acre of solar arrays is reserved for the station’s systems and science experiments, and the amount of power available for a docked spacecraft isn't much — it's similar to the amount of electricity a refrigerator uses.

Designers also have the challenge of building a spacecraft strong enough to withstand impacts from micrometeoroids, but at the same time they cannot carry armor that’s too heavy to launch.

CCP gave aerospace companies a list of requirements their spacecraft need to meet during NASA’s certification process for use as in-orbit lifeboats. Boeing, Sierra Nevada Corporation and SpaceX are working in partnership with NASA on spacecraft designs that meet these requirements.

GPS/GNSS Backup eDLoran System Delivers 5-Meter Accuracy

As reported by GPS World: Durk van Willigen, René Kellenbach, and Cees Dekker of the Dutch consulting firm Reelektronika, and Wim van Buuren of the Dutch Pilots’ Corporation authored the ENC presentation about enhanced differential Loran (eDLoran), with results that greatly — and pleasantly — surprised many in the audience. A full technical article by these authors, describing the equipment, methodology, and test results of eDLoran, will appear in the July issue of GPS World.

The new Loran project arose from the need of harbor pilots responsible for bringing large and super-large freight ships into dock. These pilots require GNSS-level accuracies of 5 meters for such work, and all parties concerned — pilots, captains, ship owners, harbor management — need some form of robustness, that is, a backup for GNSS in case of jamming, unintentional interference, system failure, or other disruption.

Extensive research had established that 5-meter accuracy cannot be met by the currently tested DLoran system, which cannot do better than 10-meter accuracy, so Reelektronika developed a new differential Loran system called enhanced differential Loran, or eDLoran. A full prototype eDLoran system was built and extensively tested in the Europort (Rotterdam) area. The tests achieved accuracies of 5 meters.

For maritime applications, eLoran is considered the most promising backup for GNSS in case the use of satellite-based navigation signals is denied. The Dutch Pilots’ Corporation asked Reelektronika to investigate whether differential Loran could meet the pilots’ 5-meter accuracy requirement for harbor navigation. This proved to be an enormous challenge, as preliminary tests showed that even 10 meters was difficult to achieve with differential Loran (DLoran) as promoted by the UK’s Trinity House/General Lighthouse Authority (see item below about Harwich UK tests by GLA and ACCESS). The challenge led to a thorough investigation of all possible error sources of a complete differential Loran system.

Differential techniques developed and implemented for Loran are comparable with differential GPS. Although the error sources of GPS and Loran are quite different, the major common error source in both systems is the lack of accurate propagation models.
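
The basic differential principle, shared by DGPS and DLoran, can be sketched in a few lines: a reference receiver at a surveyed position attributes the difference between what it measures and what it should measure to the shared propagation error, and users subtract that difference from their own measurements. The values and function names below are purely illustrative, not the authors' algorithm.

    # Minimal sketch of a differential correction, assuming (simplistically)
    # that the reference station and the user see the same propagation error.
    # Values are invented for illustration.

    def compute_correction(reference_measured: float, reference_true: float) -> float:
        """Correction = what the reference station measured minus what it should
        have measured at its surveyed location (e.g., a range or time of arrival)."""
        return reference_measured - reference_true

    def apply_correction(user_measured: float, correction: float) -> float:
        """Subtract the shared error estimated at the reference station."""
        return user_measured - correction

    # Example: a propagation error of +42.0 units is seen by both receivers.
    correction = compute_correction(reference_measured=10_042.0, reference_true=10_000.0)
    user_corrected = apply_correction(user_measured=25_042.0, correction=correction)
    print(user_corrected)  # 25000.0 -- the shared error cancels out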

This led to a new research project to find a more accurate differential Loran technique. All possible error sources were investigated again where possible, which yielded some unexpected findings regarding accuracy and cost.

Enhanced Differential Loran: eDLoran
The new concept of differential Loran had to deliver two important improvements. The first is a significant reduction in the latency of the data in the data channel; the second is that the data channel should serve a large number of reference stations without becoming saturated. The simple conclusion was that Eurofix could not meet these two requirements. However, Eurofix is still the prime GNSS-backup candidate for distributing accurate UTC over very large parts of Europe. Further, Eurofix can send short messages that might be encrypted for secure communication purposes, and which could then form a terrestrial backup for services such as Galileo PRS.

Instead of using the Eurofix channel, eDLoran uses the public mobile GSM (Global System for Mobile Communications) network to send the differential corrections to users. eDLoran receivers therefore contain a simple modem for connection to the GSM network. The eDLoran reference stations are also connected to the Internet, which may be implemented via a cabled connection or via a GSM modem. Fortunately, many GSM networks today are robust with respect to GPS outages.

The eDLoran infrastructure is not connected to any eLoran transmitter station and operates completely autonomously. Each eDLoran reference station is connected to a central eDLoran server over that network connection.
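
A minimal sketch of that correction flow, assuming a plain HTTP interface between the reference stations, the central server, and the receivers' mobile-data modems; the URL, payload fields, and polling behavior are assumptions for illustration, not details of Reelektronika's implementation.

    # Sketch of an eDLoran-style correction feed over the public mobile network.
    # The server URL, payload fields, and polling interval are hypothetical.
    # Requires the third-party 'requests' package.

    import time
    import requests

    SERVER_URL = "http://edloran-server.example.net/corrections"  # hypothetical

    def publish_correction(station_id: str, asf_correction_us: float) -> None:
        """A reference station posts its latest correction to the central server."""
        requests.post(SERVER_URL, json={
            "station": station_id,
            "asf_correction_us": asf_correction_us,  # e.g., a propagation correction, in microseconds
            "timestamp": time.time(),
        }, timeout=5)

    def fetch_corrections() -> list[dict]:
        """A receiver with a GSM/mobile modem polls the server for fresh corrections."""
        response = requests.get(SERVER_URL, timeout=5)
        response.raise_for_status()
        return response.json()

    # A receiver would poll at a short interval to keep correction latency low,
    # then apply the corrections to its raw eLoran measurements.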

eDLoran Results
Both static and dynamic tests have been carried out. Here, only the final result of the dynamic test is presented. For full details on both sets of tests, see the upcoming full-length technical article in the July issue of GPS World magazine.

The results were demonstrated to the harbor authorities in real time on the pilots’ laptop, on which the GPS-RTK and eDLoran positions were shown simultaneously. The logged GPS-RTK data is plotted on a Google Earth map shown in the accompanying figure. The track was widened to 10 meters, as the accuracy requirement is 5 meters on either side of the track. The raw eLoran track is also shown, as well as the final white eDLoran track.
The red track is based on raw eLoran data without any corrections. The transparent blue line is made by GPS-RTK and is widened to 10 meters, giving the required ±5-meter limits of eDLoran. The white line is output from the eDLoran receiver, which stays within the borders of the 10-meter-wide transparent blue line.
Conclusions
The outcome of the research opens some new and quite surprising possibilities for multiple applications. Only a few of the authors’ conclusions appear here:
  1. eDLoran offers the best possible eLoran accuracy, as it does not suffer from swaying wire antennas, sub-optimal timing control of the transmitter station, or differential data latency.
  2. There is no need to replace older Loran-C stations with eLoran transmitters, saving large amounts of money. The existing Loran stations have a proven reliability track record. Further savings may be obtained by containerising the transmitters and operating the stations unmanned.
  3. Installing eDLoran reference stations is fast, simple and very cost effective.
  4. As there is no data channel bandwidth limitation, multiple reference stations can be installed which offers increased reliability and makes the system more robust against terrorism and lightning damage.
  5. A single eDLoran server, or multiple servers, can be installed in a protected area. There is hardly a practical limit on the number of differential reference stations they can serve.

The Self-Driving Car Of Tomorrow May Be Programmed To Hit You

As reported by Wired: Suppose that an autonomous car is faced with a terrible decision to crash into one of two objects. It could swerve to the left and hit a Volvo sport utility vehicle (SUV), or it could swerve to the right and hit a Mini Cooper. If you were programming the car to minimize harm to others–a sensible goal–which way would you instruct it to go in this scenario?

As a matter of physics, you should choose a collision with a heavier vehicle that can better absorb the impact of a crash, which means programming the car to crash into the Volvo. Further, it makes sense to choose a collision with a vehicle that’s known for passenger safety, which again means crashing into the Volvo.

But physics isn't the only thing that matters here. Programming a car to collide with any particular kind of object over another seems an awful lot like a targeting algorithm, similar to those for military weapons systems. And this takes the robot-car industry down legally and morally dangerous paths.

Even if the harm is unintended, some crash-optimization algorithms for robot cars would seem to require deliberate and systematic discrimination against, say, large vehicles as the objects to collide into. The owners or operators of these targeted vehicles would bear this burden through no fault of their own, other than that they care about safety or need an SUV to transport a large family. Does that sound fair?

What seemed to be a sensible programming design, then, runs into ethical challenges. Volvo and other SUV owners may have a legitimate grievance against the manufacturer of robot cars that favor crashing into them over smaller cars, even if physics tells us this is for the best.

Is This a Realistic Problem?
Some road accidents are unavoidable, and even autonomous cars can’t escape that fate. A deer might dart out in front of you, or the car in the next lane might suddenly swerve into you. Short of defying physics, a crash is imminent. An autonomous or robot car, though, could make things better.

While human drivers can only react instinctively in a sudden emergency, a robot car is driven by software, constantly scanning its environment with unblinking sensors and able to perform many calculations before we’re even aware of danger. It can make split-second choices to optimize crashes–that is, to minimize harm. But software needs to be programmed, and it is unclear how to do that for the hard cases.

In constructing the edge cases here, we are not trying to simulate actual conditions in the real world. These scenarios would be very rare, if realistic at all, but nonetheless they illuminate hidden or latent problems in normal cases. From the above scenario, we can see that crash-avoidance algorithms can be biased in troubling ways, and this is also at least a background concern any time we make a value judgment that one thing is better to sacrifice than another thing.

In previous years, robot cars have been confined largely to highway or freeway environments. This is a relatively simple environment, in that drivers don’t need to worry so much about pedestrians and the countless surprises of city driving. But Google recently announced that it has taken the next step by testing its automated car on exactly these kinds of city streets. As their operating environment becomes more dynamic and dangerous, robot cars will confront harder choices, be it running into objects or even people.

Ethics Is About More Than Harm
The problem is starkly highlighted by the next scenario, also discussed by Noah Goodall, a research scientist at the Virginia Center for Transportation Innovation and Research. Again, imagine that an autonomous car is facing an imminent crash. It could select one of two targets to swerve into: either a motorcyclist who is wearing a helmet, or a motorcyclist who is not. What’s the right way to program the car?

In the name of crash-optimization, you should program the car to crash into whatever can best survive the collision. In the last scenario, that meant smashing into the Volvo SUV. Here, it means striking the motorcyclist who’s wearing a helmet. A good algorithm would account for the much higher statistical odds that the biker without a helmet would die, and killing someone is surely among the outcomes auto manufacturers most desperately want to avoid.

But we can quickly see the injustice of this choice, as reasonable as it may be from a crash-optimization standpoint. By deliberately crashing into that motorcyclist, we are in effect penalizing him or her for being responsible, for wearing a helmet. Meanwhile, we are giving the other motorcyclist a free pass, even though that person is much less responsible for not wearing a helmet, which is illegal in most U.S. states.

Not only does this discrimination seem unethical, but it could also be bad policy. That crash-optimization design may encourage some motorcyclists to not wear helmets, in order to not stand out as favored targets of autonomous cars, especially if those cars become more prevalent on the road. Likewise, in the previous scenario, sales of automotive brands known for safety may suffer, such as Volvo and Mercedes Benz, if customers want to avoid being the robot car’s target of choice.

The Role of Moral Luck
An elegant solution to these vexing dilemmas is to simply not make a deliberate choice. We could design an autonomous car to make certain decisions through a random-number generator. That is, if it’s ethically problematic to choose which one of two things to crash into–a large SUV versus a compact car, or a motorcyclist with a helmet versus one without, and so on–then why make a calculated choice at all?

A robot car’s programming could generate a random number; and if it is an odd number, the car will take one path, and if it is an even number, the car will take the other path. This avoids the possible charge that the car’s programming is discriminatory against large SUVs, responsible motorcyclists, or anything else.
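
A sketch of what such a deliberately random tie-break could look like in code; the two options are taken from the scenarios above, and everything else is illustrative.

    # Illustrative only: resolve an ethically fraught two-way choice by chance
    # rather than by a calculated preference.

    import random

    def choose_crash_target(option_a: str, option_b: str) -> str:
        """Pick between two unavoidable collision options at random,
        so the programming encodes no preference for either."""
        return option_a if random.random() < 0.5 else option_b

    print(choose_crash_target("large SUV", "compact car"))
    print(choose_crash_target("helmeted motorcyclist", "unhelmeted motorcyclist"))
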
This randomness also doesn’t seem to introduce anything new into our world: luck is all around us, both good and bad. A random decision also better mimics human driving, insofar as split-second emergency reactions can be unpredictable and are not based on reason, since there’s usually not enough time to apply much human reason.

Yet, the random-number engine may be inadequate for at least a few reasons. First, it is not obviously a benefit to mimic human driving, since a key reason for creating autonomous cars in the first place is that they should be able to make better decisions than we do. Human error, distracted driving, drunk driving, and so on are responsible for 90 percent or more of car accidents today, and 32,000-plus people die on U.S. roads every year.

Second, while human drivers may be forgiven for making a poor split-second reaction–for instance, crashing into a Pinto that’s prone to explode, instead of a more stable object–robot cars won’t enjoy that freedom. Programmers have all the time in the world to get it right. It’s the difference between premeditated murder and involuntary manslaughter.

Third, for the foreseeable future, what’s important isn’t just arriving at the “right” answers to difficult ethical dilemmas, as nice as that would be. It’s also about being thoughtful about your decisions and being able to defend them–it’s about showing your moral math. In ethics, the process of thinking through a problem is as important as the result. Making decisions randomly, then, evades that responsibility. Instead of thoughtful decisions, they are thoughtless, and this may be worse than reflexive human judgments that lead to bad outcomes.


Can We Know Too Much?
A less drastic solution would be to hide certain information that might enable inappropriate discrimination–a “veil of ignorance”, so to speak. As it applies to the above scenarios, this could mean not ascertaining the make or model of other vehicles, or the presence of helmets and other safety equipment, even if technology could let us, such as vehicle-to-vehicle communications. If we did that, there would be no basis for bias.

Not using that information in crash-optimization calculations may not be enough. To be in the ethical clear, autonomous cars may need to not collect that information at all. Should they be in possession of the information, and using it could have minimized harm or saved a life, there could be legal liability in failing to use that information. Imagine a similar public outrage if a national intelligence agency had credible information about a terrorist plot but failed to use it to prevent the attack.

A problem with this approach, however, is that auto manufacturers and insurers will want to collect as much data as technically possible, to better understand robot-car crashes and for other purposes, such as novel forms of in-car advertising. So it’s unclear whether voluntarily turning a blind eye to key information is realistic, given the strong temptation to gather as much data as technology will allow.

So, Now What?
In future autonomous cars, crash-avoidance features alone won’t be enough. Sometimes an accident will be unavoidable as a matter of physics, for myriad reasons–such as insufficient time to press the brakes, technology errors, misaligned sensors, bad weather, and just pure bad luck. Therefore, robot cars will also need to have crash-optimization strategies.

To optimize crashes, programmers would need to design cost-functions–algorithms that assign and calculate the expected costs of various possible options, selecting the one with the lowest cost–that potentially determine who gets to live and who gets to die. And this is fundamentally an ethics problem, one that demands care and transparency in reasoning.
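
To make the idea concrete, here is a toy cost function in Python: each candidate maneuver is scored as the probability of a bad outcome times an assigned severity, and the lowest-cost option wins. The probabilities and severity weights are invented, and choosing them at all is precisely the ethical judgment the article is pointing at.

    # Toy crash-optimization cost function. All numbers are invented; choosing
    # them is exactly the value judgment (who bears the risk) discussed above.

    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        p_injury: float   # estimated probability of serious injury
        severity: float   # assigned "cost" of that injury outcome

    def expected_cost(m: Maneuver) -> float:
        return m.p_injury * m.severity

    options = [
        Maneuver("swerve left into SUV", p_injury=0.2, severity=100.0),
        Maneuver("swerve right into compact car", p_injury=0.4, severity=100.0),
    ]

    best = min(options, key=expected_cost)
    print(f"lowest expected cost: {best.name} ({expected_cost(best):.1f})")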

It doesn't matter much that these are rare scenarios. Often, the rare scenarios are the most important ones, making for breathless headlines. In the U.S., a traffic fatality occurs about once every 100 million vehicle-miles traveled. That means you could drive for more than 100 lifetimes and never be involved in a fatal crash. Yet these rare events are exactly what we’re trying to avoid by developing autonomous cars, as Chris Gerdes at Stanford’s School of Engineering reminds us.
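
As a rough check on that claim (the annual-mileage and driving-lifetime figures below are assumptions, not numbers from the article):

    \[
    13{,}000\ \tfrac{\text{miles}}{\text{year}} \times 60\ \text{years} \approx 7.8\times 10^{5}\ \text{miles per driving lifetime},
    \qquad
    \frac{10^{8}\ \text{miles per fatality}}{7.8\times 10^{5}\ \text{miles per lifetime}} \approx 128\ \text{lifetimes}.
    \]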

Again, the above scenarios are not meant to simulate real-world conditions anyway; they’re thought experiments–something like scientific experiments–meant to simplify the issues in order to isolate and study certain variables. In this case, the variable is the role of ethics, specifically discrimination and justice, in crash-optimization strategies more broadly.

The larger challenge, though, isn't just thinking through ethical dilemmas. It’s also about setting accurate expectations with users and the general public, who might otherwise find themselves surprised in bad ways by autonomous cars. Whatever answer to an ethical dilemma the car industry leans toward will not be satisfying to everyone.


Ethics and expectations are challenges common to all automotive manufacturers and tier-one suppliers who want to play in this emerging field, not just particular companies. As the first step toward solving these challenges, creating an open discussion about ethics and autonomous cars can help raise public and industry awareness of the issues, defusing outrage (and therefore large lawsuits) when bad luck or fate crashes into us.