Search This Blog

Friday, January 29, 2016

Army Demonstrates Autonomous Vehicle Capabilities at Detroit Auto Show

As reported by Popular Mechanics: The military is continuing to experiment with autonomous vehicle technologies. Earlier this month, the U.S. Army successfully deployed a fully autonomous ground vehicle that was flown in by a fully autonomous helicopter. Now the Army has revealed that autonomous driving technology can be used in a number of its trucks and military vehicles.

At the Detroit Auto Show, the U.S. Army Tank Automotive Research, Development, and Engineering Center (TARDEC) showed off one of its autonomous vehicles. In addition to cutting down on required personnel by having a convoy of autonomous vehicles follow one human driver, driverless vehicles could also navigate areas with a high number of IEDs or other hazards without risking human life. A new video from Stars and Stripes takes a look at the new driverless Army trucks. 
Just like Google's driverless cars, the autonomous military vehicles use a LIDAR system to create a three-dimensional world map and navigate around obstacles. The Army's large autonomous trucks are still being developed—they're a little harder to fine-tune than the little pods Google has out on the roads—but finding new ways to perform wartime operations while keeping soldiers out of harm's way is one of the Army's top priorities.
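The core of that mapping step can be sketched in a few lines (a minimal illustration assuming range-and-angle returns, not the Army's or Google's actual software): each LIDAR return, a distance plus two angles, is converted into a 3D point, and the accumulated points form the obstacle map the vehicle navigates by.

```python
import math

def lidar_to_points(scans):
    """Convert (range_m, azimuth_rad, elevation_rad) LIDAR returns
    into Cartesian (x, y, z) points for a 3D obstacle map."""
    points = []
    for r, az, el in scans:
        x = r * math.cos(el) * math.cos(az)
        y = r * math.cos(el) * math.sin(az)
        z = r * math.sin(el)
        points.append((x, y, z))
    return points

# A single return 10 m dead ahead at zero elevation maps to (10, 0, 0).
print(lidar_to_points([(10.0, 0.0, 0.0)]))
```

In a real system millions of such points per second are fused with GPS and inertial data into a rolling world model; the geometry above is the common starting point.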

Droneboarding: The Sport We Should Have Seen Coming

As reported by The Verge: Okay, there's a lot going on here so let's break things down. Is it impractical to use quadcopters to tow a toddler on a snowboard? Yes, very. Did the battery on the drone in this video last long? Probably not. Did the kid go very fast? He's barely moving. Is this a great idea? Of course, of course, of course.
Using a consumer grade drone to tow people on snow (or on water?) makes about as much sense as strapping a giant fan to someone's back while they're paragliding — but people do that and it looks like great fun. And while drones may be pretty weak with batteries that only last tens of minutes, they are at least getting stronger, and we can definitely imagine a team of drones pulling a fully-grown human. Maybe they could do it on a sled? Like huskies? Hunting down rogue robots through the future-frozen wastelands of middle America after the coming Ice Age / AI revolt??
Okay, that's too far, but this is still pretty alright.

Thursday, January 28, 2016

The Artificial Intelligence Technology That Solved 'Go' Will Be Accessible In and Through Your Smartphone

As reported by TechRepublic: Google has developed a machine learning system capable of mastering Go - an ancient Chinese game whose complexity stumped computers for decades.
While IBM's Deep Blue computer mastered chess in the mid-1990s, and in more recent years a system built by Google's DeepMind lab has beaten humans at classic '70s arcade games, Go was a different matter.
Go has around 200 possible moves per turn, compared to 20 per turn in chess. Over the course of a game of Go there are so many possible moves that searching through each of them to identify the best play is too costly from a computational point of view.
Now a system developed by Google DeepMind has beaten European Go champion and elite player Fan Hui. Rather than being programmed in how to play the game, the AlphaGo system learned how to do so using two deep neural networks and an advanced tree search.
Go is typically played on a 19-by-19-square board and sees players attempt to capture empty areas and surround an opponent's pieces. To teach the system how to play the game, moves from 30 million Go games played by human experts were fed into AlphaGo's neural networks. The system then used reinforcement learning to work out the type of moves that were most likely to succeed, based on these past matches. This approach allows AlphaGo to restrict the number of possible moves it needs to search through during a game - making the process more manageable.
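The search-narrowing idea can be sketched very simply (a toy illustration, not AlphaGo's actual code, with made-up move names and probabilities): a policy network assigns a probability to each legal move, and the tree search expands only the handful of most promising candidates rather than all ~200 legal moves.

```python
# Toy sketch of policy-guided move pruning: a "policy" maps legal
# moves to probabilities, and the search only explores the top few.
def prune_moves(policy_probs, top_k=3):
    """Keep only the top_k most probable moves for deeper search."""
    ranked = sorted(policy_probs.items(), key=lambda kv: kv[1], reverse=True)
    return [move for move, prob in ranked[:top_k]]

# Hypothetical probabilities for five candidate board points.
probs = {"D4": 0.40, "Q16": 0.30, "C3": 0.15, "K10": 0.10, "A1": 0.05}
print(prune_moves(probs))  # ['D4', 'Q16', 'C3']
```

AlphaGo pairs this pruning with a second "value" network that estimates who is ahead from a position, so deep rollouts are only spent on moves the policy already considers plausible.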
DeepMind CEO Demis Hassabis described Go as "probably the most complex game that humans play. There are more configurations of the board than there are atoms in the universe."
It was that complexity that meant the game had been so difficult for machines to master, said DeepMind's David Silver. "In the game of Go we need this amazingly complex intuitive machinery, which people previously thought was only available in the human brain, to even have the right idea of who's ahead and what the right move is."
Google has suggested that the approach used by AlphaGo to learn how to master Go could be extended to solving more weighty problems, such as climate change modelling, as well as to improving Google's interactions with users of its services.
For instance, DeepMind's Silver suggests the technology could help personalize healthcare by using a similar reinforcement learning technique to understand which treatments would "lead to the best outcomes for individual patients based on their particular track record and history".

 More significantly, Hassabis sees the achievement as progress towards an even grander goal, of building an AI with the same general capabilities and understanding as humans.

"Most games are fun and were designed because they're microcosms of some aspect of life. They might be slightly constrained or simplified in some way but that makes them the perfect stepping stone towards building general artificial intelligence."
Similar AI initiatives are underway at tech giants across the world, with Facebook recently revealing its deep learning system's ability to recognize people and things in images and to predict real-world outcomes, such as when a tower of blocks will topple.

Why Google is pursuing narrow, not general, AI

Simon Stringer, director of the Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, said that AlphaGo and other deep learning systems are good at specific tasks - be that spotting objects or animals in photos or mastering a game. But these systems work very differently from the human brain and shouldn't be viewed as representing progress towards developing a general, human-like intelligence - which he believes requires an approach guided by biology.
"If you want to solve consciousness you're not going to solve it using the sorts of algorithms they're using," he said.
"We all want to get to the moon. They've managed to get somewhere up this stepladder, ahead of us, but we're only going to get there by building a rocket in the long term.
"They will certainly develop useful algorithms with various applications but there will be a whole range of applications that we're really interested in that they will not succeed at by going down that route."
In the case of DeepMind, Stringer says the reinforcement learning approach used to teach systems to play classic arcade games and Go has limitations compared to how animals and humans acquire knowledge about the world.
While these reinforcement learning algorithms can learn to map which actions lead to the best outcomes, they are "model-free", meaning the system "knows nothing about its world".
That approach is very different to how a rat's brain enables it to navigate a maze, he said.
"It was shown over half a century ago that what rats do is learn about the structure of their environment, they learn about the spatial structure and the causal relations in their world and then, when they want to get from A to B, they juggle that information to create a novel sequence of steps to get to that reward."
When you teach a system using model-free reinforcement learning, Stringer says it's "behaviorally very limiting".
"As the environment changes, for example one route is blocked off, the system doesn't know anything about its world so it can't say 'This path is blocked, I'm going to take the next shortest one'. It can't adapt but rats can."
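The distinction Stringer describes can be illustrated with a toy example (a hypothetical maze of my own, not from the article): a model-based agent that knows which cells of its maze connect can replan a new route when a corridor is blocked, whereas a model-free policy holds only one cached action per state and has no basis for rerouting.

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search over a known map of the environment --
    the 'model' a model-free learner lacks."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no route exists

# Four-cell maze: A connects to B and C, both of which lead to D.
maze = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(shortest_path(maze, "A", "D"))     # ['A', 'B', 'D']

# Block the A-B corridor; the agent replans through C immediately.
blocked = {"A": ["C"], "B": ["D"], "C": ["D"], "D": []}
print(shortest_path(blocked, "A", "D"))  # ['A', 'C', 'D']
```

A model-free policy trained on the original maze would still emit "go to B" at cell A and simply fail; because planning here runs over an explicit map, changing the map changes the behavior with no retraining.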
Similarly, Google's announcement a few years back that it had trained a neural network to spot cats in images doesn't represent a step towards developing a human-like vision system.
"When we look at a cat, we're not just aware there's a cat in the image, we see all of the millions of visual features that make up that cat and how they're related to each other. In other words our visual experience is much richer than one of these deep learning architectures, which simply tells you whether there's a particular kind of feature in an image."
In particular, such systems lack the human ability to bind features together - he said - to comprehensively understand how features in an image are related to one another. Deep learning neural networks also generally don't model biological systems that appear to play a key role in how humans assign meaning to the world. These models typically exclude, for example, feedback in the brain's visual cortex and the precise timings in the electrical pulses between neurons, he said, adding that the centre in Oxford had developed concrete theories about the importance of these features in the visual cortex.
"We brought all of those elements together. At the very least it gives us a deep insight into what is so special about human vision that hasn't been captured in artificial vision systems yet."
This biologically-inspired approach is very different to that taken by DeepMind but Stringer believes it is necessary to have a chance of one day cracking general artificial intelligence.
"If you want to solve consciousness you're not going to solve it using the sorts of algorithms they're using."
The downside is that Stringer believes the ultimate payoff for his research will be a long time coming, a factor he thinks has driven DeepMind's decision to focus on narrow AI that could be applicable in the near-future.
"I have to admit, I'm always a bit surprised, given the resources that DeepMind have, why they don't devote more resources to actually trying to recreate the dynamics of brain function. I think it's because when you're trying to raise funding you need to produce jam today, you need these algorithms to work quickly otherwise that tap gets turned off."
With that in mind, Google recently announced that it will use Movidius's processors to power its advanced neural computation engine on mobile devices and, in turn, help the chipmaker with its neural network technology.
Google plans to use Movidius's flagship MA2450 chip, touted as the only commercial solution on the market today that can perform complex neural network computations in "ultra-compact" form factors.
Stringer says: "My aim is to produce the first prototypical conscious systems, something very simple, somewhere between a mouse and a rat, within the next 20 to 30 years."
The DeepMind software that beat Go champion Hui, in a match that took place last October, was running on Google Cloud Platform and reportedly distributed across about 170 GPUs (graphics processing units) and 1,200 CPUs (central processing units).
Google has also been experimenting with Cloud Vision Tech, through a mobile accessible API.
The next major challenge for Google's AlphaGo will come in March, when it will play the world's reigning Go champion Lee Sedol.
DeepMind's Silver is confident AlphaGo has what it takes to beat all comers, at least in the long run.
"A human can perhaps play 1,000 games a year, AlphaGo can play through millions of games every single day. It's at least conceivable that as a result AlphaGo could, given enough processing, given enough training, given enough search power, reach a level that's beyond any human."

Wednesday, January 27, 2016

SpaceX Just Announced Details for Its First Hyperloop Test

As reported by Yahoo News: Elon Musk is taking another step forward with his innovative high-speed public transportation system, aptly named the Hyperloop. On Tuesday, his company SpaceX announced a partnership with Aecom, the world's largest design and construction firm, to build a one-mile test track in Hawthorne, California, for this year's Hyperloop pod competition, Tech Insider reported.
The open competition, which was announced on June 15, kicks off this weekend with a showcase of the more than 100 pod designs. The selected pods will compete at the new track sometime in the summer of 2016, the SpaceX website said. The vacuum-sealed track is slated to be six feet wide and one mile in length, and since it's a test track, a 12-foot-long foam pit will sit at the end in case a pod has a brake failure, an Aecom statement from Tuesday said.
The Hyperloop was announced in 2013 as Tesla and SpaceX CEO Elon Musk's "train of the future," aka the fastest way to travel. In Musk's vision, the depressurized tube-based train will travel at 760 mph and carry up to 840 passengers an hour, or 7.6 million a year. The cost of building it was estimated at $6 billion in 2013, Mic reported. Musk declared the Hyperloop a public challenge, releasing a 60-page document on the project and emphasizing that SpaceX will not pursue the Hyperloop itself, Tech Insider reported.
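Those capacity figures can be sanity-checked with simple arithmetic (my own back-of-the-envelope, not from the article): 840 passengers an hour running around the clock comes out a little under the quoted annual figure, suggesting the 7.6 million assumes near-continuous operation.

```python
# Rough consistency check of the quoted Hyperloop capacity figures.
per_hour = 840                     # passengers per hour (quoted)
per_year = per_hour * 24 * 365     # continuous round-the-clock service
print(per_year)  # 7358400 -- same ballpark as the quoted 7.6 million
```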
"What we are delivering is more than just a track to test pod prototypes; it's a glimpse into the future," Aecom CEO Michael Burke said in the statement. Aecom worked on the Barclays Center in Brooklyn, New York, and the Crossrail tunnel in London, The Verge reported.
Since the Hyperloop's announcement, two independent companies have sprung up in hopes of commercializing the high-speed transportation business: Hyperloop Technologies and Hyperloop Transportation Technologies. On Jan. 8, Hyperloop Technologies announced that it had received the pipes to build a test tube in Nevada that could be running by 2020. But Hyperloop Transportation Technologies wasn't about to sit still; soon after, on Jan. 21, the company announced it will build its own 5-mile test track in Quay Valley, a sustainably built community planned near Fresno, California.
Although technically unaffiliated, Hyperloop Technologies is sponsoring the pod competition, contributing $150,000 for the winning prizes, the Verge reported Jan. 15. 

Tuesday, January 26, 2016

Discrepancy Detected in GPS Time (Updated)

As reported by Aalto University: Aalto University's Metsähovi observatory located in Kirkkonummi, Finland, detected a rare anomaly in time reported by the GPS system (Google translation). 

The automatic monitoring system of a hydrogen maser atomic clock triggered an alarm which reported a deviation of 13.7 microseconds. While this is tiny, it is a sign of a problem somewhere, and does not exclude the possibility of larger timekeeping problems happening. 

The specific source of the problem is not known, but potential candidates are a faulty GPS satellite or an atomic clock in one of the satellites. A particle flare-up from the sun is unlikely, as the observatory has not detected unusually high solar activity.

(Update 17:25 01/28/2016 MST) ITNews reports: A time spike in the global positioning system which rippled through the world yesterday was caused by a satellite launched in 1990 failing and triggering a software bug, United States officials have confirmed.

Other radio observatories such as Jodrell Bank in Britain and ATCA in Australia confirmed the 13 microsecond error, and speculation rose that it might have been caused by an older satellite, SVN 23, failing and being decommissioned.

Although the timing anomaly measured just microseconds, it could have caused significant navigation errors, said Richard Easther, head of the University of Auckland's physics department.

"The rule of thumb is that for every nanosecond of error, you could be out by as much as a foot," Easther said.

"An error of 13 microseconds or 13,000 nanoseconds works out as just under four kilometers."
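Easther's rule of thumb follows from the speed of light: a radio signal covers roughly one foot per nanosecond, so a clock error translates directly into a ranging error. The quoted figure checks out (a back-of-the-envelope verification, not from the article):

```python
# One nanosecond of timing error ~= one foot of ranging error.
error_ns = 13_000              # 13 microseconds = 13,000 nanoseconds
feet = error_ns * 1.0          # ~1 foot per nanosecond
km = feet * 0.3048 / 1000      # feet -> kilometers
print(round(km, 2))            # 3.96 -- "just under four kilometers"
```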

The United States Air Force confirmed that the GPS anomaly was caused by the SVN 23 satellite failing.

A spokesperson for the USAF 50th Space Wing at Schriever Air Base, Colorado, said the issue started on January 26 at 12.49 am local time, when verified users experienced GPS timing issues.

"Further investigation revealed an issue in the global positioning system ground software which only affected the time on legacy L-band signals," a spokesperson said.

"This change occurred when the oldest vehicle, SVN 23, was removed from the constellation [of GPS satellites orbiting Earth].

"While the core navigation systems were working normally, the coordinated universal time timing signal was off by 13 microseconds which exceeded the design specifications.

"The issue was resolved at 6.10 am Mountain Standard Time, however global users may have experienced GPS timing issues for several hours."

The 50th Space Wing said operator procedures were modified to prevent a repeat of the GPS timing anomaly, until the ground system software has been corrected. An operational review will also be conducted into procedures and impacts on users.

"No reports of issues with GPS-aided munitions were reported by the US Joint Space Operations Center at Vandenberg Air Force Base; the US Strategic Command's Commercial Integration Cell, which operates out of the USJSOC, will act as a portal to determine the scope of the GPS error for commercial users."

SVN 23 was launched in 1990 as part of the improved Block IIA GPS 19-satellite constellation. When it failed it was the oldest GPS satellite in operation, its 25-year service life well exceeding its 7.5-year design life expectancy.

Monday, January 25, 2016

Insurance Companies Looking for Fallback Plans to Survive Driverless Cars

As reported by The Christian Science Monitor: Self-driving cars, such as the fleet Google has been operating for several years, are still mostly a curiosity. But it seems inevitable that they will become a significant part of the nation’s transportation infrastructure in the near future.

And that could mean a huge downsizing of the auto insurance industry, as the frequency of accidents declines and liability shifts from the driver to the vehicle’s software or automaker. It could also greatly reduce what we pay for car insurance.

Among auto insurers, State Farm is taking the lead in realigning its services with this new landscape.

  • General Motors recently announced that it has partnered with ride-sharing company Lyft in a $500 million plan to create an “on-demand” network of self-driving cars. Uber has also been outspoken in its plan to rely increasingly on autonomous cars.
  • Ford has announced plans to triple its research fleet of self-driving Fusion Hybrid cars (from 10 to 30), boosting speculation that it plans its own autonomous car.
  • Google says its fleet of self-driving cars has logged more than 1 million miles since 2009 with only 12 minor accidents — none of them the fault of the vehicles.
Self-driving cars are not yet commercially available, but autonomous-car technology, such as crash-avoidance systems, is making its way into models from many automakers including Mercedes-Benz, Volvo and Tesla. Research and consulting firm Celent, in a recent report on “the end of auto insurance,” projects that within 20 to 30 years, more than 50% of cars on the road will be autonomous.

KPMG, an advisory and research firm, predicts that these trends mean that within 25 years the personal auto insurance industry could shrink to less than 40% of its current size. If cars are self-driving, perhaps owners will only need to buy car insurance policies that cover car theft and non-crash damage such as hail and floods.
State Farm considers a new role

State Farm, the largest auto insurance company in the country, appears to be making plans to survive in this new world. One possibility could be for the insurance giant to reinvent itself as a “life management company,” as the company put it in a patent application recently published by the U.S. Patent Office.

State Farm’s patent application, “Aggregation and Correlation of Data for Life Management Purposes,” describes how the company could analyze data about a customer’s vehicles, home and personal health, find patterns and offer “personalized recommendations, insurance discounts, and other added values or services that the individual can use to better manage and improve his or her life.”

To that end, State Farm would collect data about:
  • Your home, including security systems, environmental conditions, energy use and home automation.
  • Your vehicle, including use of the vehicle and your physical and mental state while driving. (NerdWallet previously has reported on State Farm’s patent-pending plan to get inside your head while you’re driving.)
  • Your health, including weight, blood pressure, sleeping patterns and fitness activities as reported by “wearable, implantable, ingestible, or hand-held personal health sensors.”

State Farm could use the data to send you advice, alerts, coupons or discounts on insurance or other goods and services, according to the patent application.

In one example given in the application, State Farm’s system might determine you are not sleeping well and correlate that with information that shows your home gets cold at night. The system would suggest that you raise the temperature to sleep more soundly.

Or your personal health metrics might show a high level of stress. The State Farm system might be aware of a recent break-in affecting your home or vehicle and recommend extra security measures to give you more peace of mind.

In response to an inquiry from NerdWallet about the patent application, a State Farm spokesperson said the insurer “takes the privacy of our customers seriously. We do not sell customer information, and we do not allow those who are doing business on our behalf to use our customer information for their own marketing purposes.”

The spokesperson declined to comment specifically on the patent, beyond saying the company is “actively innovating in a number of areas.”
Transforming into a life-management advisor could play to State Farm’s strengths:
  • State Farm has a vast customer base. At the end of 2014 it had 82 million customer accounts, including auto, home, health and life insurance policies and banking accounts.
  • The company also is adept at analyzing huge amounts of data about people, cars, homes, health, pets, weather and much more. It processes about 35,000 claims a day.
  • State Farm has a lot of money. The mutual company had a net worth of $80 billion at the end of 2014, a year in which its subsidiaries generated $4.2 billion in net income on $71.2 billion in revenue. (2015 figures are not yet available.) The company can afford to test new ideas and technologies.
State Farm isn’t the only insurance company eyeing a future in which its expertise in risk assessment is harnessed to provide recommendations and advice to consumers. Travelers, for example, recently applied to patent a device that offers specific suggestions for managing errands and other travel. Customers would be able to see a map of “risk zone” data for places they want to go, such as stores, restaurants and roads. They could then plan the day “with an eye toward how ‘risky’ such endeavors may be,” according to the patent application.

Products and systems described in patent applications may never make it to the consumer. But State Farm’s “life management” patent application fits a pattern for the company. Applications published over the past several years show that State Farm sees a promising future in consumer-data analysis that could allow it to calculate scores for customer behavior, change customers’ daily habits through advice, recommend products and target advertisements based on where you drive.

Auto insurers must adjust to disruption

Donald Light, Celent’s director of North America property/casualty insurance, predicts that as self-driving cars gain momentum, auto insurers will go out of business if they can’t reduce their cost structures — the massive buildings, the armies of agents, the computer systems.

He said auto insurers will have to ask themselves, “Am I OK with being a smaller company? Have I adjusted my cost structure so I survive being smaller?”

Light says it’s unlikely companies that depend on auto insurance premiums will be able to make up the difference by shifting to selling other types of insurance. “There aren’t other kinds of insurance lying in the street waiting to be written,” he says.

Few auto insurance companies have taken serious action to prepare for the gutting of their business, according to a June 2015 KPMG survey. Most senior insurance executives believe that any change will happen far in the future, or not at all, according to the survey. Almost one-third (32%) say the companies they work for have “done nothing” to prepare for the advent of driverless cars. In addition, 23% say they have little or no understanding of driverless cars and only 6% say they have an operational plan to deal with “the end of auto insurance.”

Shifting into new business lines is a possible tactic, says Light, “but I doubt it solves the cost-structure problem,” he says.

For example, say you currently pay $800 a year for car insurance.

“What’s it worth to me to have a personal life manager? It’s not worth $800 a year to me. Maybe it’s worth $100 a year to me. Revenue goes down in a material way,” he says. “Companies need to accept this reality earlier rather than later.”

Forget Blue Origin vs. SpaceX—the Real Battle is Between Old and New Ideas

As reported by Ars Technica: Friday’s launch of the New Shepard rocket in West Texas renewed the tired debate about whether Blue Origin or SpaceX has achieved more in the reusable spaceflight game. These discussions first flared up in November, when no less than Jeff Bezos and Elon Musk sparred on Twitter over the magnitude of New Shepard’s first flight into space and subsequent vertical landing. Ultimately the debate is vacuous and completely misses the big picture.

Each company has achievements to be proud of. Blue Origin landed first and now has taken the next critical step toward full reusability by reflying a booster. SpaceX also landed vertically, about a month after Blue Origin. SpaceX’s Falcon 9 rocket is a much larger and more powerful booster, flying a more dynamically challenging profile. Technically, its landing was more impressive. The company is also developing this capability while delivering payloads into orbit for NASA and the private sector.
There is no “better.” Both companies are kicking ass. I think a lot of people who read this probably share a common goal with me: we’d like to see wider access to space. We’d like to see colonization of the Moon, or maybe Mars, or maybe beyond. We’d like to see a highway to the stars. There is only one way this happens: dramatically reducing the cost of getting into space. And the way to do this is by reusing your rockets and spacecraft.
When Bezos, Musk, and many of us were kids, we kind of assumed NASA was going to take care of that. Bezos and Musk went into the dot-com business, and after they made their billions, they realized the future promised in Star Trek hadn't come to pass. The way humans get into space today hasn't changed much since the 1960s, when the Russian Soyuz spacecraft began flying.
NASA tried reusability with the space shuttle. But the vehicle had tremendous turnaround costs after each flight. The shuttle had more than 20,000 tiles as part of its heat shield, each individually numbered, each of which had to be checked. As a NASA program, the shuttle relied on multiple large aerospace contractors working on the program. In the end, the reusable vehicle which aimed to slash the cost of access to space to $25 per pound ended up closer to $25,000 per pound.
The space agency still has its expensive, traditional contractors today, but it has given up reusability. NASA’s oft-touted Space Launch System is entirely expendable, including its engines. The same engines, RS-25s, powered the shuttle and were reused. Now they will be fired once during an SLS launch and then thrown away.

NASA says it needs this huge rocket to explore Mars, and that only by delivering exceptionally large payloads to space can it stage human missions to Mars. And that may be right. But along with this large rocket, which may cost as much as $2 billion or $3 billion per launch, NASA will need budgets it has not seen since the days of Apollo to get close to Mars. Unfortunately there is no indication that Congress or a new president will be willing to add billions of dollars a year to NASA’s budget. For example, Donald Trump has said he is more interested in potholes than space.
Bezos and Musk appear to have realized this long ago. They have built their space businesses around low-cost, reusable vehicles. SpaceX has already driven down costs in the satellite launch market. And now both men are trying to make the huge leap from their low-cost boosters to fully reusable vehicles.
What's frustrating is that this whole debate has been miscast as Blue Origin versus SpaceX. This is, rather, new ideas and motivations set against the status quo. Bezos and Musk have made their fortunes, and now they have invested some of those resources to try and bring about the futures they thought they were going to experience by now. They want thousands—millions, even—of people to live and work in space.

The two dot-com billionaires are working against decades of spaceflight inertia, in both NASA and its mandated civil servant workforce, but even more so in the large, influential aerospace contractors accustomed to large, cost-plus contracts. NASA does amazing things, very hard things like flying past Pluto or building a technical marvel like the International Space Station, but what it has not accomplished during its half century of existence is making space cheap, or fast.
It’s far from clear that Bezos and Musk will succeed, as the technological hurdle they are trying to jump is very high. But each is used to finding himself in an upstart position against large, vested interests. Bezos first took on powerful book publishers with Amazon, and later Wal-Mart and other gargantuan retailers, to become one of the world’s largest online stores. Musk has taken on his share of entrenched competitors, too. PayPal took business from the banking industry, and Tesla has tilted at the automotive and fossil fuel industries.
They are now seeking to do the same to the aerospace industry. Reusable rockets are the disruptive technology of spaceflight. They have the potential to radically cut costs and, within our lifetimes, take us back to the Moon and beyond.
Fortunately, since their initial Twitter dust-up back in November, Bezos and Musk have appeared to bury the hatchet, at least in public. It is true both men have large egos and both want the glory of democratizing space. But they also share a common goal, and they must realize that competition can only spur them to greater heights.
So we should celebrate the achievements of both Blue Origin and SpaceX, not bicker about whose rocket is bigger, went further, or landed first. Both companies are tackling hard problems, largely with private money. Both companies are trying to push the frontier opened by Yuri Gagarin and Alan Shepard more than half a century ago. And if either succeeds, we all win.

Friday, January 22, 2016

SpaceX’s Dragon Capsule for Astronauts Blazes Through Crucial Hover Test

As reported by Yahoo News: With every test, with every launch, with every landing – and with every unfortunate fiery explosion – SpaceX is edging toward its dream of creating a space transportation system that drastically reduces the cost of missions and one day could even take it to Mars.

While all the recent attention has been on SpaceX’s Falcon 9 rocket and its landing technology, engineers have also been working hard on developing the latest version of its Dragon capsule.
The spacecraft, which is currently used to take supplies to the International Space Station, always returns to Earth with a splash, dropping into the ocean with its descent slowed by parachutes.
But SpaceX wants it to land on solid ground using thrusters, as it has been attempting, with varying degrees of success, with its Falcon 9 rocket. While such landings would eliminate the need for salvage teams to head out to sea, the capability is also crucial if SpaceX is to achieve its long-term aim of missions to Mars, a place where, the last time we looked, no oceans were sloshing around.
Offering a glimpse into its work, the space company this week released a video of a recent test of the SuperDraco thrusters designed to bring the Dragon 2 capsule, the version designed for manned missions, gently back to the ground "with the accuracy of a helicopter."
As the footage shows, the thrusters all fire up together to raise the spacecraft for a five-second hover, “generating approximately 33,000 lbs of thrust before returning the vehicle to its resting position,” SpaceX said in comments accompanying the video.
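In a balanced hover, thrust roughly equals weight, so the quoted thrust figure implies how much mass the SuperDracos were supporting. A back-of-envelope sketch (the unit-conversion and standard-gravity constants here are standard values, not from the article, and the test vehicle was tethered, so this is only a rough estimate):

```python
# Back-of-envelope check of the quoted SuperDraco hover figure.
LBF_TO_N = 4.44822          # newtons per pound-force (standard conversion)
G0 = 9.80665                # m/s^2, standard gravity

thrust_lbf = 33_000         # total thrust quoted for the five-second hover
thrust_n = thrust_lbf * LBF_TO_N
hover_mass_kg = thrust_n / G0   # mass a perfectly balanced hover would imply

print(f"{thrust_n / 1000:.0f} kN supports about {hover_mass_kg / 1000:.1f} t")
```

That works out to roughly 147 kN, or about 15 metric tons supported, consistent with a heavily loaded test article rather than a bare capsule.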
The tests, which are taking place at a SpaceX facility in McGregor, Texas, allow engineers to refine the spacecraft’s landing software and systems, NASA said. The first Dragon flights taking astronauts to the International Space Station could take place as early as next year, though for the time being the return trips to Earth are likely to still involve parachute landings in the sea.

Volvo Says it Will Make ‘Death-Proof’ Cars by 2020

As reported by ExtremeTech: Swedish automaker Volvo has long kept track of how many people are seriously injured or killed while driving its vehicles. It uses this data to see how much safer it can make its vehicles in the event of a crash. Now the company has made a bold promise: by 2020 there will be no serious injuries or fatalities in a Volvo car or SUV.

Cars are getting smarter with the addition of autonomous technologies, and this is how Volvo hopes to reach its goal of zero deaths in its cars. This does not, of course, preclude someone from driving recklessly and getting themselves killed. However, conventional driving should be made much safer with the inclusion of a number of technologies. It starts with making the interior of the car safer with improved airbags and restraints. Then things get more futuristic.
Volvo already has various smart features in its cars, but combining them all makes it much harder to end up in a serious accident. Adaptive cruise control, for example, is already available on many cars. It lets you set a maximum speed but uses radar to maintain a safe distance from the car in front of you, even applying the brakes if need be. This can be taken a step further with full collision avoidance: when a crash is likely, the driver is warned, and if no action is taken, the car can begin braking to avoid, or at least minimize, the impact.
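The distance-keeping logic behind adaptive cruise control can be sketched as a simple constant-time-headway controller. The function name, gains, and headway values below are illustrative assumptions, not Volvo's actual algorithm:

```python
def acc_target_speed(set_speed, gap, ego_speed, lead_speed,
                     headway=1.8, standstill=5.0, k=0.5):
    """Target speed (m/s) for a radar-based adaptive cruise controller."""
    # Desired gap grows with speed: a constant time-headway policy.
    desired_gap = standstill + headway * ego_speed
    # Match the lead car's speed, correcting toward the desired gap.
    follow_speed = lead_speed + k * (gap - desired_gap)
    # Never exceed the driver's set speed; never command reverse.
    return max(0.0, min(set_speed, follow_speed))

# Free road: far behind a slower car, cruise at the set speed (30 m/s).
print(acc_target_speed(30.0, gap=200.0, ego_speed=30.0, lead_speed=25.0))
# Closing in: gap below the desired headway, so slow below the set speed.
print(acc_target_speed(30.0, gap=30.0, ego_speed=30.0, lead_speed=25.0))
```

A production system would add braking limits, sensor filtering, and fail-safes; this only illustrates the set-speed cap and gap-correction idea.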
A relatively new technology that Volvo plans to make extensive use of is lane assistance. Cars use cameras to detect lane markings and alert the driver if the vehicle begins to drift, which has been found to dramatically reduce crashes caused by drowsy and distracted driving. Cameras can also identify road signs to alert drivers to posted speed limits and upcoming hazards.
Cameras will also be used to watch for pedestrians in the vicinity of the vehicle. This is similar to the technology that is used in self-driving cars to identify potential obstacles on the road. The driver can be alerted if a person is in the car’s path and the brake can be automatically applied. In addition to people, cameras can be used to spot large animals in the roadway. For example, moose are common in Volvo’s home territory, and they’ll really mess your car up. Volvo has created a system that can act to avoid colliding with such a critter, saving both you and it.
Automakers like Ford and Tesla are moving quickly toward fully autonomous vehicles. Then there’s Google’s self-driving car program. Volvo too is in the early stages of driverless tech, and handing control over to a computer when it’s clear something is wrong could be a step in that direction. Proving that vehicles can prevent deaths with automated technologies could go a long way toward convincing the public and regulators that self-driving cars are the best option. Volvo thinks these self-driving cars will be the safest of all.
Still, claiming something to be death-proof seems risky. They said the Titanic was unsinkable, after all.

Thursday, January 21, 2016

Meet FAROS, the Firefighting Drone that Flies and Crawls up Walls

As reported by ScienceDaily: The 1974 American disaster film The Towering Inferno vividly depicted the struggles of firefighters battling a blaze in a 138-story skyscraper. To this day, fires in high-rise buildings are considered among the most dangerous disasters.

Skyscraper fires are particularly difficult to contain because they spread rapidly through densely occupied spaces and must be fought within a building's complex vertical structure. Access to a skyscraper during a fire is limited, and the initial situation is hard to assess.

A research team at the Korea Advanced Institute of Science and Technology (KAIST) led by Professor Hyun Myung of the Civil and Environmental Engineering Department developed an unmanned aerial vehicle, named the Fireproof Aerial RObot System (FAROS), which detects fires in skyscrapers, searches the inside of the building, and transfers data in real time from fire scenes to the ground station.
An extended version of the Climbing Aerial RObot System (CAROS), which the team created in 2014, FAROS can both fly and climb walls.
Built on a quadrotor platform, FAROS can switch freely between flight and spider-like crawling on walls, and back again, letting it navigate the labyrinth of narrow spaces filled with debris and rubble inside a blazing building.
The drone "estimates" its pose by utilizing a 2-D laser scanner, an altimeter, and an Inertia Measurement Unit sensor to navigate autonomously. With the localization result and using a thermal-imaging camera to recognize objects or people inside a building, the FAROS can also detect and find the fire-ignition point by employing dedicated image-processing technology.
The FAROS is fireproof and flame-retardant. The drone's body is covered with aramid fibers to protect its electric and mechanical components from the direct effects of the flame. The aramid fiber skin also has a buffer of air underneath it, and a thermoelectric cooling system based on the Peltier effect to help maintain the air layer within a specific temperature range.
The research team demonstrated the feasibility of the localization system and wall-climbing mechanism in a smoky indoor environment. The fireproof test showed that the drone could endure the heat of over 1,000° Celsius from butane gas and ethanol aerosol flames for over one minute.
Professor Myung said, "As cities become more crowded with skyscrapers and super structures, fire incidents in these high-rise buildings are life-threatening massive disasters. The FAROS can be aptly deployed to the disaster site at an early stage of such incidents to minimize the damage and maximize the safety and efficiency of rescue mission."
The research team has recently begun enhancing the fireproof design of the exteroceptive sensors, including the 2-D laser scanner and thermal-imaging camera, since these are more exposed to fire than the internal sensors and electronic components.

With Latest Launch, India En Route to its Own GPS System by Midyear

As reported by SpaceNews: In its first mission of 2016, the Indian Space Research Organisation (ISRO) on Wednesday successfully launched the fifth satellite of its space-based navigation system, which it says will become fully operational by the middle of this year.
The nationally televised launch took place at 9:31 a.m. local time from Satish Dhawan Space Center, the country’s spaceport in Sriharikota on India’s southeastern coast.
The 14.2 billion rupee ($212 million) Indian Regional Navigation Satellite System (IRNSS) is a constellation of seven near-identical satellites: three in geostationary orbit fixed above the equator, two in geosynchronous orbits inclined at 29 degrees, and two spares. It is designed to provide positioning service to users in India and the region extending up to 1,500 kilometers from its borders. Four of the satellites, IRNSS-1A through 1D, are already in place.
In Wednesday's launch, ISRO's Polar Satellite Launch Vehicle injected the fifth satellite, IRNSS-1E, into a sub-geosynchronous transfer orbit with a perigee of 282 kilometers and an apogee of 20,655 kilometers, inclined 19.21 degrees to the equator, very close to the intended orbit. "It was the 32nd consecutive success for ISRO's workhorse," said K. Sivan, director of the Vikram Sarabhai Space Centre in Thiruvananthapuram, which produced the rocket.
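From the quoted perigee and apogee, Kepler's third law gives the transfer orbit's period. The Earth radius and gravitational parameter below are standard values, not figures from the article:

```python
import math

MU = 398_600.4418    # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6_378.137  # km, Earth's equatorial radius

perigee_alt, apogee_alt = 282.0, 20_655.0        # km, from the article
a = R_EARTH + (perigee_alt + apogee_alt) / 2.0   # semi-major axis, km
period_s = 2 * math.pi * math.sqrt(a**3 / MU)    # Kepler's third law

print(f"a = {a:.0f} km, period = {period_s / 3600:.1f} h")
```

The roughly six-hour period is typical of a sub-geosynchronous transfer orbit; the onboard motor then raises the orbit toward the 24-hour geosynchronous slot.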
Like its predecessors, the 1,425-kilogram IRNSS-1E satellite has two payloads: a navigation payload operating in L5-band and S-band; and a ranging payload consisting of a C-band transponder and retro reflectors for laser ranging. A Rubidium atomic clock is part of the navigation payload for navigation and ranging. Credit: ISRO
ISRO said the “satellite is in good health and its solar panels have been deployed.” After four orbit-raising maneuvers using the satellite’s onboard motor, it will be positioned at its allotted geosynchronous orbit with a 28.1 degree inclination at 111.75 degrees East longitude, it said.
“We have started the year with a grand success but we have still a long way to go,” ISRO Chairman Kiran Kumar said in a post-launch address. “Two more satellites have to be launched in the next two months to complete our navigational system and we have to test fly the heavy launch Mark-3 version of our Geostationary Satellite Launch Vehicle this year.”
According to ISRO, the design of the satellites makes the IRNSS system interoperable with the U.S. GPS and European Galileo systems.
ISRO said the four IRNSS satellites already in space have started functioning from their designated slots and their “signal-in-space” has been validated by various agencies inside and outside the country.
“The current achieved position accuracy is 20 meters over 18 hours of the day with the four satellites. With the launch of IRNSS-1E and subsequent 1F and 1G in February and March 2016, the IRNSS constellation will be complete for total operational use,” ISRO said.
ISRO said the IRNSS will make available two types of services — a standard positioning service open to all users, and a restricted service with encrypted signals in the bands reserved for authorized users. The IRNSS satellites are designed to operate for 10 years.