
Friday, June 10, 2016

Tesla Knows When a Crash is Your Fault

As reported by the Washington Post: Every day, our cars are becoming smarter and more connected. This may someday save your life in a crash, or prevent one altogether. But it also makes it far harder to evade blame when you're the cause of a fender-bender.

One Tesla owner appears to be finding that out firsthand as he struggles to convince the luxury automaker his wife wasn't the one who crashed his Model X. Instead, he complains, the car suddenly accelerated all by itself, jumped the curb and rammed straight into the side of a shopping center.

Tesla is disputing the owner's account of the incident, citing detailed diagnostic logs that show the car's gas pedal suddenly being pressed to the floor in the moments before the collision.
"Consistent with the driver's actions, the vehicle applied torque and accelerated as instructed," Tesla said in a press statement.
At no time did the driver have Tesla's Autopilot or cruise control engaged, according to Tesla, which means the car was under manual control; no one but the human driver could have caused the crash. The car uses multiple sensors to double-check a driver's accelerator commands.
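Inspecting logs like these is conceptually simple. Here's a rough sketch of the kind of check involved, using a made-up log format rather than Tesla's actual diagnostics:

```python
# Hypothetical pre-crash telemetry check: was the accelerator pedal
# commanded to the floor while Autopilot and cruise control were off?
# The log format here is invented for illustration.
records = [  # (seconds_before_impact, pedal_percent, autopilot_engaged)
    (3.0, 12, False),
    (2.0, 98, False),
    (1.0, 100, False),
]

manual_full_throttle = any(
    pedal >= 95 and not autopilot
    for _, pedal, autopilot in records
)
print("manual full-throttle input logged:", manual_full_throttle)  # True
```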
The Model X owner appears to be standing by his story, but here's the broader takeaway: cars have reached a level of sophistication at which they can tattle on their own owners, simply by handing over the secrets embedded in the data they already collect about your driving.
Your driving data is extremely powerful: It can tell your mechanic exactly what parts need work. It offers hints about your commute and your lifestyle. And it can help keep you safe, when combined with features such as automatic lane-keeping and crash avoidance systems.
But the potential dark side is that the data can be abused. A rogue insurance company might look at it and try to raise your premiums. It may give automakers an incentive to claim that you, the owner, were at fault for a crash even if you think you weren't. To be clear, that isn't necessarily what's going on with Tesla's Model X owner. But the case offers a window into the kind of issues that drivers will increasingly face as their vehicles become smarter.

Thursday, June 9, 2016

This Deep Space Atomic Clock Is Key for Future Exploration

As reported by Time: We all intuitively understand the basics of time. Every day we count its passage and use it to schedule our lives.
We also use time to navigate our way to the destinations that matter to us. In school we learned that speed and time will tell us how far we went in traveling from point A to point B; with a map we can pick the most efficient route – simple.
But what if point A is the Earth, and point B is Mars – is it still that simple? Conceptually, yes. But to actually do it we need better tools – much better tools.
At NASA’s Jet Propulsion Laboratory, I’m working to develop one of these tools: the Deep Space Atomic Clock, or DSAC for short. DSAC is a small atomic clock that could be used as part of a spacecraft navigation system. It will improve accuracy and enable new modes of navigation, such as unattended or fully autonomous operation.
In its final form, the Deep Space Atomic Clock will be suitable for operations in the solar system well beyond Earth orbit. Our goal is to develop an advanced prototype of DSAC and operate it in space for one year, demonstrating its use for future deep space exploration.
Speed and time tell us distance

To navigate in deep space, we measure the transit time of a radio signal traveling back and forth between a spacecraft and one of our transmitting antennae on Earth (usually one of NASA’s Deep Space Network complexes located in Goldstone, California; Madrid, Spain; or Canberra, Australia).
We know the signal is traveling at the speed of light, a constant at approximately 300,000 km/sec (186,000 miles/sec). Then, from how long our “two-way” measurement takes to go there and back, we can compute distances and relative speeds for the spacecraft.
For instance, an orbiting satellite at Mars is an average of 250 million kilometers from Earth. The time the radio signal takes to travel there and back (called its two-way light time) is about 28 minutes. We can measure the travel time of the signal and then relate it to the total distance traversed between the Earth tracking antenna and the orbiter to better than a meter, and the orbiter’s relative speed with respect to the antenna to within 0.1 mm/sec.
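For a rough sense of the arithmetic, here is a Python sketch using the round numbers above (not actual Deep Space Network code):

```python
# Convert a two-way light-time measurement into a one-way range.
# Real DSN processing also models atmospheric delays, station
# locations, and relativistic effects; this is the bare geometry.
C_KM_PER_S = 299_792.458  # speed of light, km/s

def range_from_two_way_light_time(two_way_seconds: float) -> float:
    """One-way distance in km from a round-trip signal time."""
    return C_KM_PER_S * two_way_seconds / 2.0

# A Mars orbiter ~250 million km away:
two_way = 2 * 250e6 / C_KM_PER_S                    # ~1,668 seconds
print(f"round trip: {two_way / 60:.1f} min")        # ~27.8 min
print(f"range: {range_from_two_way_light_time(two_way):,.0f} km")
```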
We collect the distance and relative speed data over time, and when we have a sufficient amount (for a Mars orbiter this is typically two days) we can determine the satellite’s trajectory.
Measuring time, way beyond Swiss precision

Fundamental to these precise measurements are atomic clocks. By measuring very stable and precise frequencies of light emitted by certain atoms (examples include hydrogen, cesium, rubidium and, for DSAC, mercury), an atomic clock can regulate the time kept by a more traditional mechanical (quartz crystal) clock. It’s like a tuning fork for timekeeping. The result is a clock system that can be ultra-stable over decades.
The precision of the Deep Space Atomic Clock relies on an inherent property of mercury ions – they transition between neighboring energy levels at a frequency of exactly 40.5073479968 GHz. DSAC uses this property to measure the error in a quartz clock’s “tick rate,” and, with this measurement, “steers” it towards a stable rate. DSAC’s resulting stability is on par with ground-based atomic clocks, gaining or losing less than a microsecond per decade.
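Conceptually, the steering is a slow feedback loop: compare the quartz oscillator against the mercury-ion reference, estimate the rate error, and nudge the quartz rate back toward zero error. A toy sketch of that idea follows; the names and the simple integrator are illustrative, not the DSAC flight software:

```python
# Toy model of disciplining a quartz oscillator to an atomic reference.
MERCURY_HZ = 40.5073479968e9  # mercury-ion transition frequency, Hz

def steer(measured_hz: float, correction_hz: float, gain: float = 0.1) -> float:
    """Update the rate correction to pull the quartz clock toward
    the atomic reference (a simple integrating controller)."""
    fractional_error = (measured_hz - MERCURY_HZ) / MERCURY_HZ
    return correction_hz - gain * fractional_error * MERCURY_HZ

# A quartz readout running 2 Hz fast is pulled back over repeated updates:
correction = 0.0
for _ in range(50):
    correction = steer(MERCURY_HZ + 2.0 + correction, correction)
print(f"steering correction: {correction:+.3f} Hz")  # approaches -2 Hz
```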
Continuing with the Mars orbiter example, the error that the Deep Space Network’s ground-based atomic clocks contribute to the orbiter’s two-way light-time measurement is on the order of picoseconds, amounting to only fractions of a meter of the overall distance error. Likewise, the clocks’ contribution to error in the orbiter’s speed measurement is a minuscule fraction of the overall error (1 micrometer/sec out of the 0.1 mm/sec total).
The distance and speed measurements are collected by the ground stations and sent to teams of navigators who process the data using sophisticated computer models of spacecraft motion. They compute a best-fit trajectory that, for a Mars orbiter, is typically accurate to within 10 meters (about the length of a school bus).
The ground clocks used for these measurements are the size of a refrigerator and operate in carefully controlled environments – definitely not suitable for spaceflight. In comparison, DSAC, even in its current prototype form as seen above, is about the size of a four-slice toaster. By design, it’s able to operate well in the dynamic environment aboard a deep-space exploring craft.
One key to reducing DSAC’s overall size was miniaturizing the mercury ion trap. Shown in the prior figure, it’s about 15 cm (6 inches) in length. The trap confines the plasma of mercury ions using electric fields. Then, by applying magnetic fields and external shielding, we provide a stable environment where the ions are minimally affected by temperature or magnetic variations. This stable environment enables measuring the ions’ transition between energy states very accurately.
The DSAC technology doesn’t consume anything other than power. All of these features together mean we can develop a clock that’s suitable for very long-duration space missions.
Because DSAC is as stable as its ground counterparts, spacecraft carrying DSAC would not need to turn signals around to get two-way tracking. Instead, the spacecraft could send the tracking signal to the Earth station or it could receive the signal sent by the Earth station and make the tracking measurement on board. In other words, traditional two-way tracking can be replaced with one-way, measured either on the ground or on board the spacecraft.
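The catch is that one-way tracking leans entirely on the onboard clock: the spacecraft timestamps the signal itself, so any unmodeled offset between its clock and the ground clock appears directly as range error. A small sketch of that sensitivity (illustrative numbers, not flight code):

```python
# Why one-way ranging demands an excellent onboard clock.
C_M_PER_S = 299_792_458.0  # speed of light, m/s

def one_way_range(t_transmit: float, t_receive: float,
                  clock_offset: float = 0.0) -> float:
    """Range in meters; clock_offset is spacecraft time minus ground time."""
    return C_M_PER_S * (t_receive - clock_offset - t_transmit)

light_time = 833.9  # one-way travel time for ~250 million km, in seconds
with_drift = one_way_range(0.0, light_time + 1e-6)                    # drift ignored
corrected = one_way_range(0.0, light_time + 1e-6, clock_offset=1e-6)  # drift modeled
print(f"1 microsecond of unmodeled offset -> {with_drift - corrected:.0f} m error")
```

DSAC's stability of less than a microsecond per decade is what keeps that error term small enough to ignore.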
So what does this mean for deep space navigation? Broadly speaking, one-way tracking is more flexible, scalable (since it could support more missions without building new antennas) and enables new ways to navigate.
DSAC advances us beyond what’s possible today

The Deep Space Atomic Clock has the potential to solve many of our current space navigation challenges.
  • Places like Mars are “crowded” with many spacecraft: Right now, there are five orbiters competing for radio tracking. Two-way tracking requires spacecraft to “time-share” the resource. But with one-way tracking, the Deep Space Network could support many spacecraft simultaneously without expanding the network. All that’s needed are capable spacecraft radios coupled with DSAC.
  • With the existing Deep Space Network, one-way tracking can be conducted at a higher-frequency band than current two-way. Doing so improves the precision of the tracking data by upwards of 10 times, producing range rate measurements with only 0.01 mm/sec error.
  • One-way uplink transmissions from the Deep Space Network are very high-powered. They can be received by smaller spacecraft antennas with greater fields of view than the typical high-gain, focused antennas used today for two-way tracking. This change allows the mission to conduct science and exploration activities without interruption while still collecting high-precision data for navigation and science. As an example, use of one-way data with DSAC to determine the gravity field of Europa, an icy moon of Jupiter, can be achieved in a third of the time it would take using traditional two-way methods with the flyby mission currently under development by NASA.
  • Collecting high-precision one-way data on board a spacecraft means the data are available for real-time navigation. Unlike two-way tracking, there is no delay with ground-based data collection and processing. This type of navigation could be crucial for robotic exploration; it would improve accuracy and reliability during critical events – for example, when a spacecraft inserts into orbit around a planet. It’s also important for human exploration, when astronauts will need accurate real-time trajectory information to safely navigate to distant solar system destinations.
Countdown to DSAC launch

The DSAC mission is a hosted payload on the Surrey Satellite Technology Orbital Test Bed (OTB) spacecraft. The DSAC Demonstration Unit, together with an ultra-stable quartz oscillator and a GPS receiver with antenna, will enter low-altitude Earth orbit after launching on a SpaceX Falcon Heavy rocket in early 2017.
While it’s on orbit, DSAC’s space-based performance will be measured in a yearlong demonstration, during which Global Positioning System tracking data will be used to determine precise estimates of OTB’s orbit and DSAC’s stability. We’ll also be running a carefully designed experiment to confirm DSAC-based orbit estimates are as accurate or better than those determined from traditional two-way data. This is how we’ll validate DSAC’s utility for deep space one-way radio navigation.
In the late 1700s, navigating the high seas was forever changed by John Harrison’s development of the H4 “sea watch.” H4’s stability enabled seafarers to accurately and reliably determine longitude, which until then had eluded mariners for thousands of years. Today, exploring deep space requires traveling distances that are orders of magnitude greater than the lengths of oceans, and demands tools with ever greater precision for safe navigation. DSAC stands ready to respond to this challenge.

Wednesday, June 8, 2016

SpaceX Plans to Relaunch a Used Rocket for the First Time this Fall

As reported by The Verge: SpaceX CEO Elon Musk shared a picture of all the rockets the company has landed so far, noting that one of them will re-fly for the first time in September or October. When that happens, SpaceX will finally be able to boast that it has reused one of its Falcon 9 vehicles.
Those target dates are a little later than what Musk had originally suggested, however. After SpaceX's first drone ship landing in April, the CEO said the Falcon 9 rocket could fly again on an orbital mission as early as May or June. It was an ambitious turnaround time for the company, especially since SpaceX is just now figuring out how to put its reusable rocket strategy into practice. Eventually, SpaceX hopes to land and re-fly its rockets within just a few weeks.


“Fourth rocket arrives in the hangar. Aiming for first reflight in Sept/Oct.” (Elon Musk, via Twitter)
There's still no word on what the first reused Falcon 9 will do. SpaceX said recently that a number of customers are interested in having their cargo fly on the landed vehicle, according to Space News. In February, a top official from international satellite operator SES said the company was particularly eager to have one of its satellites sent to space on a previously landed Falcon 9, according to Spaceflight Now.

Drone Swarms Will Soon Fly Alongside Fighter Jets


As reported by Wireless Design Mag: Right now, the military’s largest unmanned aerial vehicles (UAVs), such as the big, bad Predator and Reaper, are controlled via ground control stations. But according to the U.S. Air Force (USAF) Chief Scientist, groups of drones may soon be fully operated from the cockpits of advanced fighter jets flying nearby.

This technological advancement would enhance mission scope and effectiveness, enabling F-35 pilots to perform sensing, reconnaissance, and targeting functions with more weapons, sensors, and cargo at their immediate disposal.

“The more autonomy and intelligence you can put on these vehicles, the more useful they will become,” said USAF Chief Scientist Greg Zacharias.

For example, Predator, Reaper, or Global Hawk aircraft could send real-time video feeds to an F-35 cockpit without having to first transmit the information to a ground control station, speeding up the process in fast-moving combat situations where a fighter pilot may need to attack. In addition, drones could be programmed to fly into high-risk areas ahead of manned fighter jets in order to assess an enemy’s aerial defenses, reducing threats to the pilots in the process.

Together, these advancements are what Zacharias refers to as “decision aide support,” meaning machines (in this instance, the drones) will be able to better interpret and communicate information without human beings having to manage each individual task. Right now, multiple humans are required to control a single drone, but future algorithms may enable one human to control 10 (or even 100) unmanned aircraft.

Algorithms may one day even advance to the point where a Predator or Reaper could follow a fighter jet without needing personnel to first input the flight path.


“Decision aides will be in cockpit or on the ground and more platform oriented autonomous systems,” Zacharias said. “A wing-man, for instance, might be carrying extra weapons, conduct ISR tasks or help to defend an area.”

Scientists, by way of wargames and computer simulations, are already working on advancing drone autonomy to the point where aircraft can trick an enemy’s radar system, as well as locate and identify targets more quickly and accurately.

“We will get beyond simple guidance and control and will get into tactics and execution,” Zacharias added.

Of course, scientists disagree on whether or not machines can (or should) be programmed to instantly respond to emerging objects or circumstances—threatening or not. Nonetheless, fighter jets (and their human pilots) will still benefit from greater interconnection with drones in order to make better, faster, and safer tactical decisions during missions.



Tuesday, June 7, 2016

FAA Warns of GPS Outages This Month During Mysterious Tests on the West Coast

As reported by Gizmodo: Starting today, it appears the US military will be testing a device or devices that will potentially jam GPS signals for six hours each day. We say “appears” because the tests were officially announced by the FAA but are centered near the US Navy’s largest installation in the Mojave Desert. And the Navy won’t tell us much about what’s going on.

The FAA issued an advisory on Saturday warning pilots that global positioning systems (GPS) could be unreliable on six different days this month, primarily in the southwestern United States. On June 7, 9, 21, 23, 28, and 30, the GPS interference testing will take place between 9:30am and 3:30pm Pacific time. But if you’re on the ground, you probably won’t notice the interference.
The testing will be centered on China Lake, California, home to the Navy’s 1.1-million-acre Naval Air Weapons Station in the Mojave Desert. The potentially lost signals will stretch hundreds of miles in each direction and will affect various types of GPS receivers, reaching the furthest at higher altitudes. But the jamming will only affect aircraft above 50 feet. As you can see from the FAA map below, the jamming will almost reach the California-Oregon border at 40,000 feet above sea level, with a radius of 505 nautical miles at its greatest range.
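As a rough illustration of how such an advisory works, the affected radius grows with altitude. Only the 50-foot floor and the 505-nautical-mile radius at 40,000 feet come from the advisory described above; the intermediate tiers in this sketch are invented placeholders:

```python
# Simplified check of whether a flight falls inside the advisory volume.
ADVISORY_TIERS = [  # (altitude_ft, radius_nm); middle tiers are guesses
    (50, 85),
    (10_000, 300),
    (25_000, 400),
    (40_000, 505),
]

def affected(distance_nm: float, altitude_ft: float) -> bool:
    """True if an aircraft is inside the (stepped) advisory volume."""
    if altitude_ft < 50:  # the jamming only affects aircraft above 50 feet
        return False
    radius = 0.0
    for tier_altitude, tier_radius in ADVISORY_TIERS:
        if altitude_ft >= tier_altitude:
            radius = tier_radius
    return distance_nm <= radius

print(affected(450, 40_000))  # True: well inside 505 nm at 40,000 ft
print(affected(450, 5_000))   # False: the radius is much smaller low down
```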

I gave the Naval Air Warfare Center Weapons Division a call yesterday, but they couldn’t tell me much.
“We’re aware of the flight advisory,” Deidre Patin, Public Affairs specialist for Naval Air Warfare Center Weapons Division told me over the phone. But she couldn’t give me any details about whether there was indeed GPS “jamming,” nor whether it had happened before. Patin added, “I can’t go into the details of the testing, it’s general testing for our ranges.”
As AVWeb points out, Embraer Phenom 300 business jets are being told to avoid the area completely during the tests. The FAA claims that the jamming test could interfere with the business jet’s “aircraft flight stability controls.”
GPS technology has become so ubiquitous that cheap jamming technology has become a real concern for both military and civilian aircraft. And if we had to speculate we’d say that these tests are probably pulling double duty for both offensive and defensive military capabilities. But honestly, that’s just a guess.
These tests are naturally going to fuel plenty of conspiracy theories about mind control, weather modification, and aliens—especially with China Lake’s proximity to both large population centers like LA and Las Vegas, and the fact that Area 51 is practically just down the road. But it doesn’t take a conspiracy theorist to tell us we’re fucked if terrorists or shitty teenagers make it a habit of jamming GPS signals for everybody.
If you experience any significant GPS interference this month, or know the “real” reason behind these tests (aliens, right?), please let us know in the comments.

Wednesday, May 25, 2016

Is This Gliding Electric Bus the Future of Public Transportation?

As reported by Popular Mechanics: If you've ever been stuck on the bus wondering how and when the humble vehicle will make the jump into the future, a fresh concept video could ease your transportation-related concerns.

The mass-transit concept, created by Beijing-based Transit Explore Bus, was shown off at the 19th China Beijing International High-Tech Expo (CHITEC) over the weekend. As the video below shows, the electric transit elevated bus glides above traffic and is designed to allow cars to pass beneath it.



The elevated, monorail-like concept is also apparently cheaper and quicker to develop than subway systems, and can hold up to 1,400 passengers. With Hebei's Qinhuangdao City set to adopt the gliding apparatus in the second half of this year, we might just see the Straddle Bus cruising over cars in no time.

Tesla Tests Self-Driving Functions with Secret Updates to Its Customers’ Cars

As reported by MIT Technology Review: When Tesla Motors introduced the Model S sedan in 2012, one of its many notable features was an always-on cellular Internet connection. A Tesla executive explained today that it has turned into a powerful advantage in the company’s contest with other carmakers and Internet giants such as Google to get self-driving cars onto public roads.

Tesla can pull down data from the sensors inside its customers’ vehicles to see how people are driving and the road and traffic conditions they experience. It uses that data to test the effectiveness of new self-driving features. The company even secretly tests new autonomous software by remotely installing it on customer vehicles so it can react to real road and traffic conditions, without controlling the vehicle.

“The ability to pull high-resolution data from these vehicles and to update the vehicles over the air is a significant part of what’s allowed us in 18 months to go from very behind the curve to what is today one of the more advanced autonomous or semi-autonomous driving features,” said Sterling Anderson, director of Tesla’s Autopilot program, at MIT Technology Review’s EmTech Digital conference in San Francisco on Tuesday (see “No Industry Can Afford to Ignore Artificial Intelligence”).

Tesla began bundling a suite of new sensors into its vehicles in 2014, saying it was for a new emergency braking feature.

But the hardware was intended for bigger things: the 12 ultrasonic sensors positioned around the car sense nearby objects, and the forward-facing cameras and radar units see the road ahead. Tesla engineers began using data streaming from cars with those sensors, along with information on their locations, to start testing autonomous driving features.

“Since introducing this hardware 18 months ago we’ve accrued 780 million miles,” said Anderson. “We can use all of that data on our servers to look for how people are using our cars and how we can improve things.” Every 10 hours Tesla gets another million miles worth of data, he said.
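Those figures imply a striking fleet-wide collection rate; the back-of-envelope arithmetic:

```python
# Rates implied by Anderson's numbers (1 million miles every 10 hours).
miles_per_10_hours = 1_000_000
fleet_miles_per_hour = miles_per_10_hours / 10           # 100,000, fleet-wide
days_per_100M_miles = 100_000_000 / miles_per_10_hours * 10 / 24
print(f"{fleet_miles_per_hour:,.0f} fleet-miles per hour")
print(f"~{days_per_100M_miles:.0f} days to log 100 million miles")  # ~42 days
```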

Tesla’s engineers initially test new self-driving software against those records. Any that perform well can also be tested by secretly installing them onto customer vehicles and watching how they respond to conditions on the road, although the software doesn't actually control the car.
“We will often install an ‘inert’ feature on all our vehicles worldwide,” said Anderson. “That allows us to watch over tens of millions of miles how a feature performs.”
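This “inert” approach is what the industry often calls shadow mode: the candidate feature sees real sensor input and computes what it would do, but its output is logged and compared against the human driver rather than actuated. A minimal sketch of the idea, with invented names rather than Tesla's software:

```python
# Shadow-mode evaluation: the candidate feature proposes commands from
# live sensor data, but nothing is ever sent to the actuators -- only
# disagreements with the human driver are logged for review.
from dataclasses import dataclass

@dataclass
class Frame:
    sensors: dict           # e.g. camera/radar/ultrasonic readings
    driver_steering: float  # what the human actually commanded

def shadow_test(frames, candidate_policy, threshold=0.1):
    """Yield frames where the inert feature disagrees with the driver."""
    for frame in frames:
        proposed = candidate_policy(frame.sensors)  # computed, never actuated
        if abs(proposed - frame.driver_steering) > threshold:
            yield frame, proposed

# Tiny demo: a policy that always steers straight vs. two logged frames.
demo = [Frame({"cam": None}, 0.0), Frame({"cam": None}, 0.5)]
flagged = list(shadow_test(demo, candidate_policy=lambda sensors: 0.0))
print(len(flagged), "disagreement(s) logged")  # 1
```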

Anderson’s team can also watch closely when a new feature is activated. For example, he showed a chart illustrating how self-driving Teslas using the Autopilot feature hold themselves much more tightly to the center of the lane than humans do when steering the car. Since its launch last October, Tesla has logged 100 million miles of vehicles steering themselves (see “10 Breakthrough Technologies 2016: Tesla Autopilot”).

Tesla’s ability to pull data from its cars and even covertly test autonomous driving software is likely unique. Google has demonstrated some of the most advanced self-driving technology, but it can only pull data from its fleet of prototypes, which is likely smaller and less widely distributed than the collection of Tesla vehicles on the road.

Other carmakers, such as GM, are also working on self-driving. But they have not embraced the idea of Internet connectivity and over-the-air updates in the way Tesla has.

However, Tesla’s strategy of using its data infrastructure to test and develop its technology in public could run into problems. Google restructured its autonomous car program in 2014 after the concerning results of an experiment in which Google employees could use self-driving prototypes. People quickly became complacent about the technology’s abilities, despite the fact that they were supposed to be ready to take over at all times.

“One guy noticed that his cell-phone battery was low, pulled out his laptop, and plugged it in at 65 miles per hour on the freeway,” Chris Urmson, who leads Google’s project, said at the EmTech event today. “We thought, this is not good.” Google committed itself to car designs without steering wheels or pedals, piloted by software alone (see “Lazy Humans Shaped Google’s New Autonomous Car”).

Anderson takes a different view. He said Tesla’s data-centric strategy will allow it to keep advancing its Autopilot technology, for example to include the ability to drive in more urban conditions and handle intersections. Tesla must be aware of drivers’ expectations, but doesn’t need to take drivers out of the equation altogether, he said.

“Autopilot is not an autonomous system and should not be treated as one,” said Anderson. “We ask drivers to keep their hands on [the wheel] and be prepared to take over.”