
Wednesday, July 1, 2015

Why SpaceX will Sort Out Sunday's Accident Faster than NASA Ever Could

As reported by The Register (Analysis): The perfect delivery record of SpaceX's Falcon 9 rocket ended on Sunday 139 seconds into flight, but there are good reasons why Elon's Musketeers will be back on schedule far faster than many are predicting.

The accident itself was a disaster, and a very unfortunate birthday present for Elon Musk, who turned 44 that day. It had been hoped he'd celebrate by seeing the first-ever landing of a Falcon rocket on SpaceX's landing barge Of Course I Still Love You. Instead he got fireworks of a different persuasion.

Here's what we know so far about the accident. The rocket completed static testing and was raised into position, loaded with around 4,000lbs (1,814kg) of cargo, including a lot of equipment intended to replace material lost when Orbital Sciences' commercial resupply rocket blew up last October.

The liftoff went perfectly, and the ground crew was preparing for first-stage separation when, at an altitude of 44.9 kilometers and a speed of 4,733km/h, the upper half of the rocket appeared to bloom and contact was lost.

At that point, the ground crew commentary went silent, which is to be expected. SpaceX has the same procedures as NASA in the event of an accident – lock the doors, back up the data, get everyone to work, and no speculation until the facts are known.

These procedures were rehearsed and set in stone even before the first Falcon's flight, and they're good scientific practice. Musk himself took to social media to provide the first details that SpaceX was positive about, tweeting that: "There was an overpressure event in the upper stage liquid oxygen tank. Data suggests counterintuitive cause."

The advantages of single-source suppliers

There are going to be a few sleepless nights at SpaceX in the coming days as engineers and designers go through the sensor data piece by piece. Musk is known for working his staff hard and this problem needs to be sorted out quickly.

And it will be, because unlike NASA, SpaceX has a huge advantage in dealing with problems. NASA rockets are put together using machinery from a hodgepodge of private contractors, all with their own design and build teams – and their own internal politics, not to mention dealing with national politicians with an axe to grind.

When the Space Shuttle Challenger went boom in 1986, it took the Rogers Commission nearly six months to come to a conclusion about what went wrong – and it might have taken longer if the renowned physicist Richard Feynman hadn't been on the team to chivvy things along and occasionally point out the obvious.

All this effort culminated in the report that identified serious problems with the O-rings sealing the joints of the shuttle's solid rocket boosters. But it also revealed that Allan McDonald, director of the Space Shuttle Solid Rocket Motor Project for the contractor Morton Thiokol, had already expressed such concern about the O-rings that he'd refused to sign the launch recommendation for Challenger's mission.

McDonald spoke out in the investigation and was removed from his job by his employers and demoted before eventually being cleared of any wrongdoing, vindicated, and reinstated. But his fate showed the problems involved in dealing with contractors.

SpaceX doesn't have those issues; it's a single company that conceived, designed, built, and flies the Falcon rockets. Finding fault is going to be a lot easier under such circumstances because there's a single data set and everyone knows everyone else.

The company is packed with highly motivated individuals and has a very flat management structure. Mistakes made are owned up to, and when the issue that caused the loss of the Falcon is identified, you can bet it will be dealt with quickly.

The current SpaceX resupply missions are on hold while this process is worked through. But you're not going to see the kind of dithering that left the Space Shuttles grounded for 32 long months. If I were a betting man I'd guess the next Falcon will fly in 32 weeks, and maybe sooner.

Getting into space is a tough business. There are few rocket systems that haven't had a failure at one time or another. While SpaceX is smarting from this first failure to deliver, the company is going to come back with a vengeance.

Megawatt Electric Race Car sets New Record at Pikes Peak International Hill Climb

As reported by Ars Technica: Electric racing cars are in vogue right now. The first Formula E championship just concluded in London (sadly the Ars-sponsored car did not win), and this side of the pond saw an electric vehicle win the prestigious Pikes Peak International Hill Climb in Colorado, setting a new record in the process. Rhys Millen took his Drive eO PP03 to the top of the mountain in 9:07.022, beating rival Nobuhiro "Monster" Tajima by more than 20 seconds.

Ride along with Rhys Millen as he sets the fastest EV time up the side of the mountain. The consequences of getting a corner wrong and going over the side don't bear thinking about.

The annual Pikes Peak International Hill Climb in Colorado is the second-oldest race in the US. It first took place in 1916, and it's a unique challenge for man and machine. Starting at Mile 7 on Pikes Peak Highway, cars race one at a time up the side of Pikes Peak, completing 156 turns in 12.4 miles (20km). It may be familiar to you from Gran Turismo 2, featuring prominently in that game, and indeed Polyphony Digital sponsored this year's race, making us wonder if the iconic event will reappear in GT7, whenever that happens to arrive.
For most of the race's long and storied history, Pikes Peak Highway was covered in gravel, but environmental concerns led to the road being paved all the way to the summit in 2011. Since then, rally cars with supple suspension, good ground clearance, and knobby tires have given way to vehicles more at home on a smooth racetrack than a forest trail.
Electric vehicles (EVs) in particular have done well since the resurfacing. From the starting line at 9,390 feet (2,862m) above sea level, the cars climb another 4,720 feet (1,440m) to the summit, causing even forced induction engines to lose power as oxygen molecules become fewer and farther between. But electric motors don't have the same altitude problem, making just as much power and torque in a vacuum as they do at sea level. Consequently, it's become a place for people to test out new EV technology.
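As a rough back-of-the-envelope illustration of why that matters (nothing from the race teams here): the standard-atmosphere model puts the start line at roughly 75 percent of sea-level air density and the summit at about 65 percent, and a naturally aspirated engine's output falls off roughly in proportion. The constants and the "power scales with density" rule of thumb in the sketch below are assumptions.

```python
import math

def air_density_ratio(altitude_m):
    """Approximate air density relative to sea level using the
    International Standard Atmosphere (troposphere) model."""
    T0, L, g, R = 288.15, 0.0065, 9.80665, 287.05   # K, K/m, m/s^2, J/(kg*K)
    T = T0 - L * altitude_m                          # temperature at altitude
    # density ratio = (T/T0)^(g/(L*R) - 1)
    return (T / T0) ** (g / (L * R) - 1)

start, summit = 2862, 2862 + 1440                    # metres, from the article
for name, alt in [("start line", start), ("summit", summit)]:
    ratio = air_density_ratio(alt)
    # crude assumption: naturally aspirated power scales with air density
    print(f"{name}: ~{ratio:.0%} of sea-level air density "
          f"-> a 300 hp NA engine makes roughly {300 * ratio:.0f} hp")
```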
Rhys Millen pilots his Drive eO PP03 electric car up the mountain to victory.
Millen's race car is rather interesting. The Latvian-made Drive eO PP03 uses six electric motors, three stacked in series for each axle. A 50kWh lithium-ion battery feeds those motors, giving the PP03 1,368hp (1,020kW) and 1,593lb-ft (2,160 Nm) at its disposal. Millen's time is the fastest for an EV, but still almost a minute off the outright course record, set in 2013 by nine-time World Rally Champion Sébastien Loeb and his fire-breathing Peugeot 208 T16 Pikes Peak. That car was more than 700 pounds (325kg) lighter than the PP03; maybe with some battery development, an EV will beat Loeb's time.

Tuesday, June 30, 2015

Rebooting the Automobile

As reported by MIT Technology Review: “Where would you like to go?” Siri asked.

It was a sunny, slightly dreamy morning in the heart of Silicon Valley, and I was sitting in the passenger seat of what seemed like a perfectly ordinary new car. There was something strangely Apple-like about it, though. There was no mistaking the apps arranged across the console screen, nor the deadpan voice of Apple’s virtual assistant, who, as backseat drivers go, was pretty helpful. Summoned via a button on the steering wheel and asked to find sushi nearby, Siri read off the names of a few restaurants in the area, waited for me to pick one, and then showed the way on a map that appeared on the screen.

The vehicle was, in fact, a Hyundai Sonata. The Apple-like interface was coming from an iPhone connected by a cable. Most carmakers have agreed to support software from Apple called CarPlay, as well as a competing product from Google, called Android Auto, in part to address a troubling trend: according to research from the National Safety Council, a nonprofit group, more than 25 percent of road accidents are a result of a driver’s fiddling with a phone. Hyundai’s car, which goes on sale this summer, will be one of the first to support CarPlay, and the carmaker had made the Sonata available so I could see how the software works.


CarPlay certainly seemed more intuitive and less distracting than fiddling with a smartphone behind the wheel. Siri felt like a better way to send texts, place calls, or find directions. The system has obvious limitations: if the phone loses its signal or its battery dies, for example, the system stops working properly. And Siri can’t always be relied upon to hear you correctly. Still, I would’ve gladly used CarPlay in the rental car I’d picked up at the San Francisco airport: a 2013 Volkswagen Jetta. There was little inside besides an air-conditioning unit and a radio. The one technological luxury, ironically, was a 30-pin cable for an outdated iPhone. To use my smartphone for navigation, I needed a suction mount, an adapter for charging through the cigarette lighter, and good eyesight. More than once as I drove around, my iPhone came unstuck from the windshield and bounced under the passenger seat.

Android Auto also seemed like a huge improvement. When a Google product manager, Daniel Holle, took me for a ride in another Hyundai Sonata, he plugged his Nexus smartphone into the car and the touch screen was immediately taken over by Google Now, a kind of super-app that provides recommendations based on your location, your Web searches, your Gmail messages, and so on. In our case it was showing directions to a Starbucks because Holle had searched for coffee just before leaving his office. Had a ticket for an upcoming flight been in his in-box, Holle explained, Google Now would’ve automatically shown directions to the airport. “A big part of why we’re doing it is driver safety,” he said. “But there’s also this huge opportunity for digital experience in the car. This is a smart driving assistant.”

CarPlay and Android Auto not only give Apple and Google a foothold in the automobile but may signal the start of a more significant effort by these companies to reinvent the car. If they could tap into the many different computers that control car systems, they could use their software expertise to reimagine functions such as steering or collision avoidance. They could create operating systems for cars.

Google has already built its own self-driving cars, using a combination of advanced sensors, mapping data, and clever navigation and control software. There are indications that Apple is now working on a car too: though the company won’t comment on what it terms “rumors and speculation,” it is hiring dozens of people with expertise in automotive design, engineering, and strategy. Vans that belong to Apple, fitted with sensors that might be useful for automated driving, have been spotted cruising around California.

After talking to numerous people with knowledge of the car industry, I believe an Apple car is entirely plausible. But it almost doesn’t matter. The much bigger opportunity for Apple and Google will be in developing software that will add new capabilities to any car: not just automated driving but also advanced diagnostics and over-the-air software upgrades and repairs. Already, a button at the bottom of the Android Auto interface is meant for future apps that could show vehicle diagnostics. Google expects these apps to be made by carmakers at first, showing more advanced vehicle data than the mysterious engine light that flashes when something goes wrong. Google would like to make use of such car data too, Holle says. Perhaps if Android Auto knew that your engine was overheating, Google Now could plan a trip to a nearby mechanic for you.

At least for now, though, the Google and Apple services essentially can read only basic vehicle data like whether a car is in drive, park, or reverse. Carmakers won’t let those companies put their software deeper into the brains of the car, and whether that will change is a crucial question. After all, modern cars depend on computers to run just about everything, from the entertainment console to the engine pistons, and whoever supplies the software for these systems will shape automotive innovation. Instead of letting Apple and Google define their future, carmakers are opening or expanding labs in Silicon Valley in an attempt to fend off the competition and more fully embrace the possibilities offered by software.

The car could be on the verge of its biggest reinvention yet—but can carmakers do it themselves? Or will they give up the keys?

Cultural shift
Cars are far more computerized than they might seem. Automakers began using integrated circuits to monitor and control basic engine functions in the late 1970s; computerization accelerated in the 1980s as regulations on fuel efficiency and emissions were put in place, requiring even better engine control. In 1982, for instance, computers began taking full control of the automatic transmission in some models.

New cars now have between 50 and 100 computers and run millions of lines of code. An internal network connects these computers, allowing a mechanic or dealer to assess a car’s health through a diagnostic port just below the steering wheel. Some carmakers diagnose problems with vehicles remotely, through a wireless link, and it’s possible to plug a gadget into your car’s diagnostic port to identify engine problems or track driving habits via a smartphone app.
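That diagnostic port speaks the standardized OBD-II protocol, and the plug-in gadgets mentioned above mostly just poll it. As a minimal sketch of what that looks like (not anything specific to CarPlay or Android Auto), here's one way to read two standard parameters through an ELM327-style adapter; the device path and baud rate are assumptions about one particular setup.

```python
import serial  # pyserial; assumes an ELM327-style adapter on a USB serial port

def obd_query(ser, pid):
    """Send a mode-01 (live data) OBD-II request and return the data bytes."""
    ser.write(f"01{pid:02X}\r".encode())
    raw = ser.read_until(b">").decode(errors="ignore")   # ELM327 prompt is '>'
    tokens = raw.replace("\r", " ").split()
    if "41" in tokens:                                    # '41' marks a mode-01 reply
        i = tokens.index("41")
        return [int(t, 16) for t in tokens[i + 2:] if len(t) == 2]
    return []

ser = serial.Serial("/dev/ttyUSB0", 38400, timeout=2)     # port and baud are guesses
ser.write(b"ATZ\r")                                       # reset the adapter
ser.read_until(b">")

rpm = obd_query(ser, 0x0C)                                # PID 0x0C: engine RPM
if len(rpm) >= 2:
    print("engine rpm:", (rpm[0] * 256 + rpm[1]) / 4)

coolant = obd_query(ser, 0x05)                            # PID 0x05: coolant temp
if coolant:
    print("coolant temp (C):", coolant[0] - 40)
```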


However, until now we haven’t seen software make significant use of all these computer systems. There is no common operating system. Since carmakers are preventing CarPlay and Android Auto from playing that role, the auto companies themselves are taking the first crack at building one. How successful they are will depend on how ambitious and creative they are. Roughly 10 minutes north of Google’s office, I got to see how one of the oldest car companies is beginning to think about this possibility.
Ford opened its research lab in Palo Alto in January. Located one door down from Skype and just around the corner from Hewlett-Packard, it looks like a typical startup space. There are red beanbags, 3-D printers, and rows of empty desks, which the company hopes to fill with more than a hundred engineers. I met a user-interface designer named Casey Feldman. He was perched atop a balance board at a standing desk, working on Ford’s latest infotainment system, Sync 3. It runs software Ford has developed, but the automaker is working on ways to hand the screen over to CarPlay or Android Auto if you plug in a smartphone. Feldman was using a box about the size of a mini-fridge, with a touch screen and dashboard controls, to test the software. He showed how Sync 3 displays a simplified interface when the car is traveling at high speed.

Ford’s first touch-screen interface, called MyFord Touch, didn’t go well. Introduced in 2010, it was plagued by bugs, and customers complained that it was overcomplicated. When Ford dropped from 10th to 20th place in Consumer Reports’ annual reliability ratings in 2011, MyFord Touch was cited as a key problem. The company ended up sending out more than 250,000 memory sticks containing software fixes for customers to upload to their cars.

Besides running apps like Spotify and Pandora Radio, Sync 3 can connect to a home Wi-Fi network to receive bug fixes and updates for the console software. Ford clearly hopes that drivers will prefer its system to either CarPlay or Android Auto, and it’s doing its best to make it compelling. “It’s a cultural shift,” says Dragos Maciuca, the lab’s technical director. The lab wants to incorporate “some of the Silicon Valley attitudes, but also processes” into the automotive industry, he says. “That is clearly going to be very challenging, but that’s why we’re here. It doesn’t make sense that you buy a car, and the first thing you do is buy a $5 suction cup for your phone.”

Ford has been ahead of many automakers in its experimentation. It has come out with a module known as Open XC, which lets people download a wide range of sensor data from their cars and develop apps to aid their driving. A Ford engineer used it to create a shift knob for cars with manual transmission so that the stick lights up or buzzes when it’s time to change gears. But Open XC has not taken off widely, and despite Ford’s best efforts, the company’s overall approach still seems somewhat conservative. Maciuca and others said they were wary of alienating Ford’s vast and diverse customer base.

In February, meanwhile, the chip maker Nvidia announced two new products designed to give cars considerably more computing power. One is capable of rendering 3-D graphics on up to three different in-car displays at once. The other can collect and process data from up to 12 cameras around a car, and it features machine-learning software that can help collision-avoidance systems or even automated driving systems recognize certain obstacles on the road. These two systems point to the huge opportunity that advanced automotive sensors and computer systems offer to software makers. “We’re arguing now you need supercomputing in the car,” Danny Shapiro, senior director of automotive at Nvidia, told me.

One of the cars at Stanford’s Dynamic Design Lab.
If anyone could find a great use for a supercomputer on wheels, it’s Chris Gerdes, a professor of mechanical engineering who leads Stanford University’s Dynamic Design Lab. Gerdes originally studied robotics as a graduate student, but while pursuing a PhD at Berkeley, he became interested in cars after rebuilding the engine of an old Chevy Cavalier. He drove me to the lab from his office in an incredibly messy Subaru Legacy.

Inside the lab, students were working away on several projects spread across large open spaces: a lightweight, solar-powered car; a Ford Fusion covered in sensors; and a hand-built two-person vehicle resembling a dune buggy. Gerdes pointed to the Fusion. After Ford gave his students a custom software interface, they found it relatively easy to get the car to drive itself. Indeed, the ability to manipulate a car through software explains why many cars can already park themselves and automatically stay within a lane and maintain a safe distance from the vehicle ahead. In the coming years, several carmakers will introduce vehicles capable of driving themselves on highways for long periods. “There are so many things you can do now to innovate that don’t necessarily require that you bend sheet metal,” Gerdes said as we walked around. “The car is a platform for all sorts of things, and many of those things can be tried in software.”

The dune-buggy-like car takes programmability to the extreme. Virtually every component is controlled by an actuator connected to a computer. This means that software can configure each wheel to behave in a way that makes an ordinary road feel as if it were covered with ice. Or, using data from sensors fitted to the front of the car, it can be configured to help a novice motorist react like a race-car driver. The idea is to explore how computers could make driving safer and more efficient without taking control away from the driver completely.

In fact, one small carmaker—headquartered in Silicon Valley—shows how transformational an aggressive approach to software innovation could be.

Drive safely
Tesla Motors, based in Palo Alto, has built what’s probably the world’s most computerized consumer car. The Model S, an electric sedan released in 2012, has a 17-inch touch-screen display, a 3G cellular connection, and even a Web browser. The touch screen shows entertainment apps, a map with nearby charging stations, and details about the car’s battery. But it can also be used to customize all sorts of vehicle settings, including those governing the suspension and the acceleration mode (depending on the model, it goes from “normal” to “sport” or from “sport” to “insane”).

Every few months, Tesla owners receive a software update that adds new functions to their vehicle. Since the Model S was released, these have included more detailed maps, better acceleration, a hill-start mode that stops the car from rolling backwards, and a blind-spot warning (providing a car has the right sensors). Tesla’s CEO, Elon Musk, has said a software patch released this summer would add automated highway driving to suitably equipped models.

These software updates can do more than just add new bells and whistles. Toward the end of 2013, the company faced a safety scare when several Model S cars caught fire after running over debris that ruptured their battery packs. Tesla engineers believed the fires to be rare events, and they knew of a simple fix, but it meant raising the suspension on every Model S on the road. Instead of requiring owners to bring their cars to a mechanic, Tesla released a patch over the airwaves that adjusted the suspension to keep the Model S elevated at higher speeds, greatly reducing the chance of further accidents. (In case customers wanted even more peace of mind, the company also offered a titanium shield that mechanics could install.)

Tesla’s efforts show how making cars more fully programmable can add value well after they roll out of the showroom. But software-defined vehicles could also become a juicy target for troublemakers.


In 2013, at the DEF CON conference in Las Vegas, two computer-security experts, Charlie Miller and Chris Valasek, showed that they could hijack the internal network of a 2010 Toyota Prius and remotely control critical features, including steering and braking. “No one really knows a lot about car security, or what it’s all about, because there hasn’t been a lot of research,” Miller told me. “It’s possible, if you went out and bought a 2013, they’ve done huge improvements—we don’t know. That’s one of the scary things about it.”
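Miller and Valasek haven't published a how-to here, and the sketch below isn't their exploit; it just illustrates why an unauthenticated internal network is worrying. Once something can reach a car's CAN bus, a tool like python-can lets it both listen to ECU traffic and transmit frames of its own (the interface name and the OBD-II broadcast request are assumptions for the example):

```python
import can  # python-can; assumes a Linux SocketCAN interface named 'can0'

# Open the in-vehicle CAN bus (or a bench test rig) via SocketCAN.
bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Passive step: log whatever traffic the ECUs are exchanging.
for _ in range(5):
    msg = bus.recv(timeout=1.0)
    if msg:
        print(f"id=0x{msg.arbitration_id:03X} data={msg.data.hex()}")

# Active step: any node on the bus can transmit. 0x7DF is the standard
# OBD-II broadcast ID; 0x02 0x01 0x0D asks every ECU for the vehicle speed.
request = can.Message(arbitration_id=0x7DF,
                      data=[0x02, 0x01, 0x0D, 0, 0, 0, 0, 0],
                      is_extended_id=False)
bus.send(request)
reply = bus.recv(timeout=1.0)
if reply:
    print("reply:", reply)
```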

A few real-world incidents point to why car security might become a problem. In February 2010, dozens of cars around Texas suddenly refused to start and also, inexplicably, began sounding their horns. The cars had been fitted with devices that let the company that leased them, the Texas Auto Center, track them and then disable and recover them should the driver fail to make payments. Unfortunately, a disgruntled ex-employee with access to the company’s system was using those gadgets to cause havoc.

I asked Gerdes whether concerns over reliability and security could slow the computerization of cars. He said that didn’t have to be the case. “The key question is, ‘How fast can you move safely?’” he says. “The bet that many Silicon Valley companies are making—and that many car companies are making with their Valley offices—is that there are ways to move faster and still be safe.”

Ultimately, the opportunities may well outweigh such concerns. Tesla’s efforts point to how significant software innovation could turn out to be for carmakers. Tesla is even experimenting with connecting the forthcoming autopilot system to the car’s calendar, for example. The car could automatically pull up outside the front door just in time for the owner to drive to an upcoming appointment.

Perhaps this also explains why Apple and Google are now dabbling in vehicle hardware: so they can fully own some people’s driving time even before carmakers decide to open up more aspects of their vehicles. “Clearly Apple and Google would love to be the ones who have the operating system for these future cars,” Gerdes says.

As I drove back to the San Francisco airport, my VW Jetta felt more low-tech than ever. The ride was fairly peaceful, with the Santa Cruz Mountains looming in the distance. Even so, after so much driving, I would’ve been glad had Siri offered to take over.

Monday, June 29, 2015

Paired with AI and VR, Google Earth Could Change the Planet

As reported by Wired: THE JAMES RESERVE is a place where the natural meets the digital.

Part of the San Jacinto mountain range in Southern California, the James is a nature reserve that covers nearly 30 acres. It’s closed to the public. It’s off the grid. Vehicles aren’t allowed. But Sean Askay calls it “one of the most heavily instrumented places in the US.” Robots on high-tension cables drop climate sensors into this high-altitude forest. Bird’s nests include automated cameras and their own sensors. Overseen by the University of California, Riverside, the reserve doubles as a research field station for biologists, academics, and commercial scientists.

In 2005, as a master’s student at the university, Askay took the experiment further still, using Google Earth to create a visual interface for all those cameras and sensors. “Basically, I built a virtual representation of the entire reserve,” he says. “You could ‘fly in’ and look at live video feeds or temperature graphs from inside a bird box.”

Somewhere along the way, the project caught the eye of Google’s Vint Cerf, a founding father of the Internet, and in 2007, Askay moved to Mountain View, California, home to Google headquarters. There, he joined the team that ran Google Earth, a sweeping software service that blends satellite photos and other images to create a digital window onto our planet (and other celestial bodies). Since joining the company, the 36-year-old has used the tool to build maps of war casualties in Iraq and Afghanistan. He put the service on the International Space Station, so astronauts could better understand where they were. Working alongside Buzz Aldrin, he built a digital tour of the Apollo 11 moon landing.

Now, as Google Earth celebrates its 10th anniversary, Askay is taking over the entire project—as lead engineer—following the departure of founder Brian McClendon. He takes over at a time when the service is poised to evolve into a far more powerful research tool, an enormous echo of his work at the James Reserve. When it debuted in 2005, Google Earth was a wonderfully intriguing novelty. From your personal computer, you could zoom in on the roof of your house or get a bird’s eye view of the park where you made out with your first girlfriend. But it proved to be more than just a party trick. And with the rapid rise of two other digital technologies—neural networks and virtual reality—the possibilities will only expand.
Through an extension to Google’s Chrome web browser called Earth View, you can view “the most beautiful and striking” satellite images from around the world, “diving in” to places like Cuba. GOOGLE

A Visit to Prague
Neural networks—vast networks of machines that mimic the web of neurons in the human brain—can scour Google Earth in search of deforestation. They can track agricultural crops across the globe in an effort to identify future food shortages. They can examine the world’s oil tankers in an effort to predict gas prices. And it so happens that Google runs one of the most advanced neural networking operations in the world. For Google Earth, Askay says, “machine learning is the next frontier.”

According to Askay’s boss, Rebecca Moore, the company is already using neural networks to examine Google’s vast trove of satellite imagery. “We have the Google Brain,” she says, referring to the central neural networking operation Google has built inside the company, “and we’re doing some experiments.” That’s news. But it’s not that surprising. Two startups—Orbital Insight and Descartes Labs—are already doing much the same thing.
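Moore doesn't say exactly what those experiments look like, but the general recipe for this kind of work is well known: cut the satellite imagery into small tiles, label some of them, and train a convolutional network to classify the rest. A toy sketch of that idea (the dataset directory and labels are hypothetical placeholders; this is emphatically not Google's pipeline):

```python
import tensorflow as tf  # an illustrative sketch, not Google's production system

# Classify 64x64 satellite tiles as 'forest' vs 'cleared'. The directory
# layout (one subfolder per label) is a hypothetical placeholder.
train = tf.keras.utils.image_dataset_from_directory(
    "tiles/train", image_size=(64, 64), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(tile is cleared land)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train, epochs=5)
```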

Meanwhile, virtual reality—as exhibited by headsets like Facebook’s Oculus Rift and Google Cardboard—is bringing a new level of fidelity and, indeed, realism to the kind of immersive digital experience offered by Google Earth. Today, using satellite imagery and street-level photos, Askay and Google are already building 3-D models of real-life places like Prague that you can visit from your desktop PC (see video at top). But in the near future, this experience will move into Oculus-like headsets, which can make you feel like you’re really there.

“We have so much interesting stuff,” Askay says of Google Earth’s massive collection of images. “How amazing would it be to experience Google Earth in that environment?”

The Google Outreach
Google isn’t the only one that will drive the evolution of Google Earth. In 2007, not long after taking the job at Google, Askay flew to Brazil, helping an indigenous tribe, the Surui, map deforestation in their area of the Amazon, and this gave rise to a wider project called Google Earth Engine. With Earth Engine, outside developers and companies can use Google’s enormous network of data centers to run sweeping calculations on the company’s satellite imagery and other environmental data, a digital catalog that dates back more than 40 years.

“So, if you want to look at 40 years of Landsat imagery and do change detection over time, you can,” Askay says. “You could do retrospective models of where deforestation took place and how fast, as well as predictive models and even near real-time detection. We’re getting to the point where we can start sending alerts saying that something that looks like deforestation has occurred in the last three days.”
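For a flavor of what that looks like in practice, here's a minimal sketch of a retrospective change-detection job using the Earth Engine Python API: build vegetation-index composites from two eras and flag pixels where greenness dropped sharply. The Landsat asset ID, band names, region, and threshold are illustrative assumptions, not a recipe from Google.

```python
import ee  # Google Earth Engine Python API; requires an authorized account
ee.Initialize()

# Region of interest: a hypothetical patch of the Amazon (lon/lat box).
roi = ee.Geometry.Rectangle([-61.0, -11.0, -60.5, -10.5])

def ndvi_composite(start, end):
    """Median NDVI over a date range; the Landsat asset and bands are assumptions."""
    col = (ee.ImageCollection("LANDSAT/LT05/C02/T1_L2")
           .filterBounds(roi)
           .filterDate(start, end))
    return col.median().normalizedDifference(["SR_B4", "SR_B3"])

before = ndvi_composite("1990-01-01", "1990-12-31")
after = ndvi_composite("2010-01-01", "2010-12-31")

# Pixels whose vegetation index dropped sharply are candidate deforestation.
loss = before.subtract(after).gt(0.3)
stats = loss.reduceRegion(ee.Reducer.mean(), roi, scale=30)
print("fraction of ROI flagged as possible forest loss:", stats.getInfo())
```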

As it stands, Earth Engine is only available to a limited number of outsiders, but Askay and Moore say Google plans to gradually open it up to a much larger audience. With a project called Map of Life, independent researchers are already examining how global warming is changing the habitat ranges of particular animal species. Others are working to track water resources. The World Resources Institute now uses the service to provide a map of deforestation not only in the Amazon, but across the globe.

Learning Gets Deep
At the moment, this map is generated with traditional computing techniques. But according to WRI chief technology officer Aaron Steele, the organization is now building a system that can expand, accelerate, and improve the process with neural networking. At companies like Google and Facebook, “deep learning” is already recognizing faces and objects in online photos uploaded by everyday internet users, and many believe the technology can significantly accelerate the analysis of satellite imagery—and to much greater effect.

“It’s a real breakthrough in computer vision, though in order to make it work, you need a whole lot of compute power,” says Steven Brumby, a former Los Alamos National Lab researcher and co-founder of Descartes Labs. “The opportunity is enormous.”

In recent months, Google has shown how effective this AI technology can be in analyzing ground-level photos captured as part of its Street View project. In building its online maps, Google once used human editors to identify addresses in photos. Now, neural networking does this automatically.

James Crawford, an ex-Googler and the CEO of Orbital Insight, a company dedicated to examining satellite imagery with neural networks, agrees that Google Earth is ripe for this kind of automatic analysis. But after using Google Earth Engine as an outside developer, he says the service may need to evolve further before it’s suited to such work. “The level of control we need in our pipeline, it didn’t really work for us,” he says. “But that could change.”

The Cardboard Effect
Elsewhere at Google, an eclectic team of engineers are building a virtual reality system for use with headsets that strap around the eyes. It began with a “20 percent” project called Google Cardboard, a headset made out of cardboard—literally—that works in tandem with your smartphone. In the beginning, this seemed like an ironic comment on the wider push toward virtual reality. But it has evolved into something far more serious.

These engineers have designed 16-lens cameras that can capture 360-degree stereoscopic images of the real world. GoPro will soon offer these cameras to the world at large. And Google is offering a service that can stitch those images into a 360-degree digital environment for viewing in Cardboard and, potentially, other headsets. “We have ambitions beyond just Cardboard,” project leader Clay Bavor told us last month. “There are many other things going on.”

The project is a natural fit for Google Earth. The Cardboard team recently unveiled what it calls Google Expeditions, a way for school students to experience distant lands via the headset, and the concept is that much more attractive in the context of the earth as a whole. “You can imagine this as a Google Earth experience,” Askay says.

For Moore, this kind of thing isn’t a novelty. It’s education. “It’s not just for fun and gaming,” she says. “It can give people a more immersive understanding of the planet—places that matter and places that are changing.”
Street View now includes “special collections” that show off places like the Great Barrier Reef. GOOGLE

A New Globe
It’s also worth remembering that Google now has its own satellite company. Last summer, it acquired Skybox, a startup that uses cube satellites to take more frequent and higher resolution photos from the skies. According to Askay and Moore, Google hasn’t yet incorporated the Skybox imagery into Google Earth. But this will come.

On one level, the prospect of pairing Skybox with technology such as neural networks is a frightening thing—another erosion of privacy in the physical world. But it also brings enormous possibilities. Today, Google Earth is a nice way to look at the planet—not to mention Mars, the Moon, and the heavens. But in the years to come, it will grow into something else. Virtual reality will bring new fidelity. And AI and other types of data analysis will bring a new understanding of our planet.

“What I’m looking forward to is combining Google Earth with the kind of dynamic data coming out of Earth Engine—data on deforestation, floods, temperatures,” Moore says. “If you render that kind of information on Google Earth, it becomes a living, breathing dashboard of the planet. You can put in everyone’s hands, not just charts and graphs of what’s going on, but high-resolution information that’s sitting, almost literally, on the surface of the earth.” It’s like Askay’s work at the James Reserve, but on a much larger scale.

June 30th Gets a Leap Second Because Earth's Rotation is Slowing Down

As reported by Gizmodo: If you’re the sort of person who lives by the motto that every second counts, next week you get to put your money where your mouth is. That’s because, as we first learned back in January, we’re all being gifted a leap second on June 30th.

Leap seconds can wreak havoc across the Internet, but, as NASA explained in detail this week, they’re essential in order to compensate for our planet’s slowing rotation.
Most of us live our lives in the steadfast world of coordinated universal time (UTC), where Earth days are treated as precisely 86,400 seconds long. But in the real world, days haven’t been that long since about 1820. That’s because a gravitational tug-of-war between the Earth and the moon is causing our planet’s rotation to slow down, making the days a wee bit longer as the years roll on. Today, the average day is approximately 86,400.002 seconds long.
You might be thinking: Okay, that’s interesting, but who’s really counting? Scientists, of course! As NASA explains in the video below, Earth scientists monitor precisely how long it takes our planet to complete a full rotation (i.e., a day) using a technique called Very Long Baseline Interferometry (VLBI). This essentially involves collating data from a worldwide network of stations every single day. And the results are not always predictable. Day length, it turns out, is influenced by everything from tectonic activity to groundwater to El Niño events.
Because day length as measured by VLBI pretty much never hits 86,400 seconds on the nose, scientists have created a second time standard, Universal Time 1, based on the Earth’s precise rotation. When UT1 and UTC drift too far apart, leap seconds are added to keep the two time scales within 0.9 seconds of each other. That’s why, as the hour approaches midnight on Tuesday, the clock will strike 23:59:60 before rolling over to 00:00:00 on July 1st.
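To make the arithmetic concrete: if the roughly two-millisecond daily excess held steady (it doesn't, which is part of why leap seconds can't be scheduled far in advance), the 0.9-second tolerance would be used up in a bit over a year.

```python
excess_per_day = 86_400.002 - 86_400.0   # ~2 ms of extra day length, per NASA's figure
print(f"UT1 drifts from UTC by about {excess_per_day * 365:.2f} s per year")
print(f"the 0.9 s limit would be reached in roughly {0.9 / excess_per_day:.0f} days")
```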
Try to make the most of that extra second, even if the Internet has an aneurysm. After all, not even Earth’s brightest scientists can predict when the next one’s coming.
To prevent any Internet mayhem, Google has been using a technique called 'leap smear', so that its computer systems aren't confused about the time when the leap second is added.  Still, there may be some issues: the day numbering used by China's BeiDou satellite navigation system could cause errors for GNSS receivers that rely on it for location and timing.
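The idea behind a leap smear is simple: instead of inserting a 61st second all at once, the clock is slowed fractionally over a long window so that it has absorbed the whole extra second by the time the leap occurs. A toy sketch of the arithmetic (the 24-hour window is an assumption; Google's actual smear parameters have varied over the years):

```python
from datetime import datetime, timedelta, timezone

LEAP = datetime(2015, 7, 1, tzinfo=timezone.utc)   # the extra second lands just before this
WINDOW = timedelta(hours=24)                        # smear window: an assumption

def smear_offset(now):
    """Fraction of the extra second a smeared clock has absorbed by `now`,
    ramping linearly from 0 to 1 second over the window before the leap."""
    progress = (now - (LEAP - WINDOW)) / WINDOW      # timedelta / timedelta -> float
    return min(max(progress, 0.0), 1.0)

for hours_before in (24, 12, 6, 1, 0):
    t = LEAP - timedelta(hours=hours_before)
    print(f"{t:%Y-%m-%d %H:%M} UTC: {smear_offset(t):.3f} s of the leap second absorbed")
```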

Sunday, June 28, 2015

World’s First 3D Printed Supercar is Unveiled

As reported by 3dPrint: The automobile industry has been relatively stagnant for the past several decades. While new car designs are released annually, and computer technology has advanced by leaps and bounds, the manufacturing processes and the effects that these processes have on our environment have remained relatively unchanged. Over the past decade or so, 3D printing has shown some promise in the manufacturing of automobiles, yet it has not quite lived up to its potential, at least according to Kevin Czinger, founder and CEO of a company called Divergent Microfactories (DM).
The Blade 3D Printed Supercar
Today, at the O’Reilly Solid Conference in San Francisco, Kevin Czinger is about to shock the world with a keynote presentation titled “Dematerializing Auto Manufacturing.”
“Divergent Microfactories is going to unveil a supercar that is built based on 3D printed parts,” Manny Vara of LMG PR tells 3DPrint.com. “It is very light and super fast — can you say faster acceleration than a McLaren P1, and 2x the power-to-weight ratio of a Bugatti Veyron? But the car itself is only part of the story. The company is actually trying to completely change how cars are made in order to hugely reduce the amount of materials, power, pollution and cost associated with making traditional cars.”
The vehicle, called the Blade, has 1/3 the emissions of an electric car and 1/50 the factory capital costs of other manufactured cars. DM’s manufacturing process also differs quite a bit from previous 3D printed vehicles we have seen, such as Local Motors’ car, which has been printed several times. Instead of 3D printing an entire vehicle, DM 3D prints aluminum ‘nodes’ which act in a similar fashion to Lego blocks. 3D printing allows DM to create elaborate and complex shaped nodes which are then joined together by off-the-shelf carbon fiber tubing. Once the nodes are printed, the chassis of a car can be completely assembled in a matter of minutes by semiskilled workers. Constructing the chassis this way requires much less capital and fewer resources, and doesn’t require the extremely skilled and trained workers that other car manufacturing techniques rely on. The important goal that DM is striving for, and it appears they have accomplished, is the reduction of pollution and environmental impact.
Individual 3D printed aluminum nodes
Today, Czinger and the rest of the team at Divergent Microfactories will be unveiling their first prototype car, the Blade.
“Society has made great strides in its awareness and adoption of cleaner and greener cars,” explains CEO Kevin Czinger. “The problem is that while these cars do now exist, the actual manufacturing of them is anything but environmentally friendly. At Divergent Microfactories, we’ve found a way to make automobiles that holds the promise of radically reducing the resource use and pollution generated by manufacturing. It also holds the promise of making large-scale car manufacturing affordable for small teams of innovators. And as Blade proves, we’ve done it without sacrificing style or substance. We’ve developed a sustainable path forward for the car industry that we believe will result in a renaissance in car manufacturing, with innovative, eco-friendly cars like Blade being designed and built in microfactories around the world.”
Assembling of the 3D printed nodes and carbon fiber tubing to construct the chassis
The Blade is one heck of a supercar, capable of going from 0-60 MPH in a mere 2.2 seconds. It weighs just 1,400 pounds, and is powered by a 4-cylinder 700-horsepower bi-fuel internal combustion engine that is capable of using either gasoline or compressed natural gas as fuel. The car chassis is made up of approximately 70 3D printed aluminum nodes, and it took only 30 minutes to build the chassis by hand. The chassis itself weighs just 61 pounds.
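Those figures make the power-to-weight boast from earlier easy to check. The Blade numbers come from the article; the Bugatti Veyron figures below are approximate outside numbers, not from the piece:

```python
# Quick sanity check on the "2x the power-to-weight ratio of a Bugatti Veyron" claim.
blade_hp, blade_lb = 700, 1_400          # from the article
veyron_hp, veyron_lb = 1_001, 4_160      # approximate Veyron figures (an assumption)

blade_ratio = blade_hp / blade_lb        # ~0.50 hp per pound
veyron_ratio = veyron_hp / veyron_lb     # ~0.24 hp per pound
print(f"Blade:  {blade_ratio:.2f} hp/lb")
print(f"Veyron: {veyron_ratio:.2f} hp/lb")
print(f"ratio:  {blade_ratio / veyron_ratio:.1f}x")
```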
“The body of the car is composite,” Vara tells us. “One cool thing is that the body itself is not structural, so you could build it out of just about any material, even something like spandex. The important piece, structurally, is the chassis.”
Kevin Czinger, Founder and CEO, Divergent Microfactories, Inc. with the Blade Supercar
The initial plan is for DM to scale up to an annual production of 10,000 of these limited supercars, making them available to potential customers. That isn’t all, though: DM doesn’t plan to stop at manufacturing cars itself. On top of selling these supercars, the company will also sell the tools and technologies so that small teams of innovators and entrepreneurs can open microfactories and build their own cars, based on their own unique designs. Whether it is a sedan, pickup truck or another type of supercar, it is all possible with this proprietary 3D printed node technology.
Pre-painted Blade supercar
The node-enabled chassis of cars built using this unique 3D printing method is up to 90% lighter, much stronger, and more durable than the chassis of cars built with more traditional techniques. Could we be looking at a major shift in thinking within the automobile manufacturing industry? Lighter, stronger, more durable, more affordable, more environmentally friendly vehicles are definitely something that just about anyone should consider a step in the right direction.
3D printing has been touted as a technology of the future, for the future, enabling individual customization of many products. Now, the ability for entrepreneurs to enter an industry previously overrun by huge corporations could mean a future with individualized, custom vehicles which perform and appear just the way we want them. If Divergent Microfactories has a say, this will be our future, and that future isn’t too far off.
Pre-painted Blade supercar
What do you think about this 3D printed supercar? Do you like the idea of entrepreneurs having an opportunity to fabricate their own line of vehicles? Is DM onto something with this unique method of automobile manufacturing? Discuss in the Divergent Microfactories 3D Printed Supercar Forum thread on 3DPB.com.  Check out the video below.
