
Monday, October 23, 2017

This Material Could Allow NASA Planes to Cross the Country in Under an Hour

A team of researchers from NASA and Binghamton University has identified
boron nitride nanotubes as a material that could be used to boost aircraft travel to
hypersonic speeds. The problem, however, is that a gram of the material costs $1,000.
As reported by Futurism: Within the next decade, planes could be capable of traveling across the country by hypersonic flight in less than an hour—all it would take is some boron nitride.
A key factor for a vehicle to maintain extremely high speeds is the intense amount of heat generated during travel; for example, the now-retired supersonic Concorde experienced temperatures of up to 260°F at its cruising speed of 1,534 miles per hour. As such, the materials used to build these aircraft must be able to withstand very high heat, in addition to being structurally stable and lightweight. A study conducted by researchers from NASA and Binghamton University investigated the properties of nanotubes made from boron nitride, a compound of boron and nitrogen, and revealed that the material could potentially make hypersonic travel (speeds above 4,000 miles per hour) possible.



Currently, carbon nanotubes are used in aircraft due to their strength and ability to withstand temperatures of up to 400 degrees Celsius (752 degrees Fahrenheit). Boron nitride nanotubes (BNNTs), however, can withstand up to 900 degrees Celsius (1,652 degrees Fahrenheit). They can also handle high amounts of stress, and are much more lightweight than their carbon counterparts.

The Price of Air Travel
The problem with using BNNTs is their cost. According to Binghamton University Associate Professor of Mechanical Engineering Changhong Ke, coating an aircraft with BNNTs would carry a very high price tag.

“NASA currently owns one of the few facilities in the world able to produce quality BNNTs,” said Ke. “Right now, BNNTs cost about $1,000 per gram. It would be impractical to use a product that expensive.”


Despite the high production cost, it’s possible prices will decrease, and production increase, after more studies detail the material’s usefulness. Carbon nanotubes were around the same price 20 years ago, but are now between $10 and $20 per gram. Ke believes something similar will happen with BNNTs.

That said, don’t expect the first application of BNNTs to be for commercial aircraft. They’ll probably be used for military fighter jets first, with commercial flights to follow. Hopefully by then, we’ll have other ways to travel quickly, be it by hyperloop, Elon Musk’s BFR rocket, or the ultra-fast “flying train” China plans to build.



Boeing Invests in Near Earth Autonomy to Accelerate Development of Autonomous Aircraft

Boeing's HorizonX division has invested in autonomous tech company Near Earth
Autonomy.  It's the first investment the company has made since being formed in
2016, and it comes alongside a partnership with Near Earth to explore "urban
mobility" products.
As reported by Futurism: Earlier this month, Boeing acquired Aurora Flight Sciences, demonstrating the company’s commitment to incorporating autonomous technology into aircraft designs. Now, the aviation company’s HorizonX Ventures division has announced its investment in Near Earth Autonomy — a company that focuses on technologies that enable reliable autonomous flight — further solidifying its support for these burgeoning technologies.



The move marks the first investment HorizonX Ventures has made since its creation last year, but the relationship between Boeing and Near Earth doesn’t end there. In addition to this investment, the companies are partnering to work on future applications for autonomous tech in sectors like urban mobility with vehicles like flying taxis.

“This partnership will accelerate technology solutions that we feel will be key to unlocking emerging markets of autonomous flight,” said Boeing HorizonX Vice President Steve Nordlund in a statement. “We are excited to begin this partnership with a company with such a depth of experience in autonomy so we can leverage the scale of Boeing to innovate for our customers.”

Near Earth Autonomy's Pedigree
Near Earth Autonomy is led by Sanjiv Singh, the company’s acting CEO. He co-founded the company alongside Marcel Bergerman, Lyle Chamberlain and Sebastian Scherer. Combined, they have over 30 years of experience with autonomous systems designed for land and air vehicles. Two of their most notable achievements include partnering with the U.S. Army in 2010 to develop full-scale autonomous helicopter flights and working with the Office of Naval Research to design an autonomous aerial cargo delivery platform for the U.S. Marines.



“This is an exciting opportunity for Near Earth,” said Singh. “The Boeing HorizonX investment will accelerate the development of robust products and enable access to a broader portfolio of applications for aerial autonomy.”

Flying taxis are becoming increasingly popular in the aerospace industry, and many expect that they will change how people get around congested cities. At the forefront, we have Dubai, which tested its autonomous flying taxi earlier this year and plans to launch a taxi service before year’s end. Meanwhile, Airbus is aiming to test its electric taxi next year, with German company Lilium hoping to have a series of commercial aircraft released by 2025.

It’s an exciting time for the future of transportation, and it’s possible that soon, the concept of manually driving a car will be a thing of the past.



Thursday, October 19, 2017

Watch this Autonomous Track Loader Excavate Dirt Without a Human Operator

We have self-driving cars, self-driving trucks, self-driving boats, and self-driving buses, so it was only a matter of time before we got self-driving bulldozers.

Built Robotics is a new company coming out of stealth today that aims to disrupt the $130 billion excavation industry with its fleet of autonomous earth movers. Rather than sit in the dusty cab all day, operators can program in the coordinates and dimensions of the hole that needs digging, then stand off to the side and watch the vehicle do all the work. The startup just raised $15 million to hire engineers and get the product to market.

And much like the self-driving vehicles operated by companies like Waymo and GM, these robot bulldozers and backhoes use sensors like LIDAR and GPS to “see” the world around them. But unlike any of the autonomous cars driving around California or Arizona these days, these heavy movers use specially designed sensors to withstand the massive amounts of vibration involved in excavation.

Built Robotics is headquartered on almost an entire acre of dirt-filled construction space in a nondescript, fenced-off area in the Dogpatch on the east side of San Francisco, where the robotic construction equipment is refined and tested. Noah Ready-Campbell, CEO and founder, was coming off of several years at Google and then eBay when he decided to leverage his early years watching his contractor father on construction sites into a new business.

“I would spend most summers working for him, painting, scrapping, digging up trash,” Ready-Campbell says. “At the time I hated it, and thought, ‘I’m never going to do this.’”

And in some sense, he still won’t be because his robot bulldozers will be doing all the digging and scrapping. And to those who are concerned about jobs that could be lost to automation, his main argument is safety and productivity. Fatal injuries among construction and extraction occupations rose by 2 percent to 924 cases in 2015, according to the US Bureau of Labor Statistics — the highest level since 2008. Meanwhile, 70 percent of construction firms lack skilled workers, potentially stalling commercial and home building projects.



“It can be boring, too,” Ready-Campbell says. “It’s monotonous work. There are safety issues. It’s easy to zone out, make mistakes, and over excavate a site... I’ve talked to a lot of operators and owners, and most say this is great.”

Self-driving cars are programmed to be accurate down to the centimeter, but with autonomous construction equipment operating within a confined, geofenced space without variables like pedestrians and bicyclists, the software that powers the machines can be much less precise. “We actually need [our machinery] to excavate with dirt and collide with the environment and do its job,” Ready-Campbell says. “You’d never want a self-driving car to collide with its environment.”
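
To illustrate what operating inside a geofence means in software, here is a minimal, hypothetical Python sketch of the kind of boundary check an autonomous excavator could run: test the machine’s position against a polygon marking the work site and halt if it falls outside. The site coordinates are made up and this is a generic point-in-polygon routine, not Built Robotics’ actual code.

```python
# Hypothetical geofence check for an autonomous excavator (illustration only,
# not Built Robotics' software). The machine's position is tested against a
# polygon marking the work site; motion is halted if it falls outside.

def inside_geofence(point, fence):
    """Ray-casting point-in-polygon test.
    `point` is (x, y); `fence` is a list of (x, y) polygon vertices."""
    x, y = point
    inside = False
    n = len(fence)
    for i in range(n):
        x1, y1 = fence[i]
        x2, y2 = fence[(i + 1) % n]
        # Does a horizontal ray cast from the point cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Work-site corners in local site coordinates (meters) -- made-up values.
WORK_SITE = [(0, 0), (60, 0), (60, 40), (20, 55), (0, 40)]

for position in [(30, 20), (59, 5), (70, 20), (5, 50)]:
    status = "continue digging" if inside_geofence(position, WORK_SITE) else "STOP: outside fence"
    print(position, "->", status)
```

In practice the fence would come from surveyed GPS coordinates of the site, but the containment check itself can stay this simple.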

His contractor father still took a little convincing. “When I first told dad, he reacted pretty negatively. He was like, ‘Why do you want to steal these guys’ jobs?’” Ready-Campbell says. But after watching the machines in operation, “He’s come around on it.”



DeepMind’s Go-Playing AI Doesn’t Need Human Help to Beat Us Anymore

Google’s AI subsidiary DeepMind has unveiled the latest version of its Go-playing software, AlphaGo Zero. The new program is a significantly better player than the version that beat the game’s world champion earlier this year, but, more importantly, it’s also entirely self-taught. DeepMind says this means the company is one step closer to creating general purpose algorithms that can intelligently tackle some of the hardest problems in science, from designing new drugs to more accurately modeling the effects of climate change.

The original AlphaGo demonstrated superhuman Go-playing ability, but needed the expertise of human players to get there. Namely, it used a dataset of more than 100,000 Go games as a starting point for its own knowledge. AlphaGo Zero, by comparison, has only been programmed with the basic rules of Go. Everything else it learned from scratch. As described in a paper published in Nature today, Zero developed its Go skills by competing against itself. It started with random moves on the board, but every time it won, Zero updated its own system, and played itself again. And again. Millions of times over.
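
To get a feel for what learning purely from self-play looks like in code, here is a minimal, hypothetical Python sketch. It is nothing like DeepMind’s actual system, which pairs a deep neural network with Monte Carlo tree search, but it follows the same loop on a toy game (Nim): start from essentially random play, have the program play both sides, and update its evaluations using nothing but the outcomes of its own games.

```python
import random
from collections import defaultdict

# Hypothetical toy illustration of self-play learning, in the spirit of
# AlphaGo Zero's training loop (not DeepMind's actual code). The "game" is
# Nim: players alternately remove 1 or 2 stones; whoever takes the last
# stone wins. The program plays both sides, starting from random play, and
# updates a value table from the outcomes of its own games.

values = defaultdict(float)   # value of a position for the player about to move
counts = defaultdict(int)
EPSILON = 0.1                 # exploration rate

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

def choose_move(stones):
    """Pick the move whose resulting position looks worst for the opponent."""
    moves = legal_moves(stones)
    if random.random() < EPSILON:
        return random.choice(moves)
    return min(moves, key=lambda m: values[stones - m])

def self_play_game(start=7):
    """Play one game against itself; return the positions each side saw."""
    stones, player = start, 0
    history = {0: [], 1: []}
    while stones > 0:
        history[player].append(stones)
        stones -= choose_move(stones)
        player ^= 1
    winner = player ^ 1   # the player who just moved took the last stone
    return history, winner

def update(history, winner):
    """Monte Carlo update: nudge position values toward the game outcome."""
    for player, positions in history.items():
        outcome = 1.0 if player == winner else -1.0
        for s in positions:
            counts[s] += 1
            values[s] += (outcome - values[s]) / counts[s]

for _ in range(20000):
    hist, win = self_play_game()
    update(hist, win)

# Positions that are multiples of 3 are losing for the player to move; the
# learned values should end up clearly negative there and positive elsewhere.
print({s: round(values[s], 2) for s in range(1, 8)})
```

After enough games the table “discovers” that positions holding a multiple of three stones are lost for the player to move, a fact it was never told.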

After three days of self-play, Zero was strong enough to defeat the version of itself that beat 18-time world champion Lee Se-dol, winning handily — 100 games to nil. After 40 days, it had a 90 percent win rate against the most advanced version of the original AlphaGo software. DeepMind says this makes it arguably the strongest Go player in history.

“By not using human data — by not using human expertise in any fashion — we’ve actually removed the constraints of human knowledge,” said AlphaGo Zero’s lead programmer, David Silver, at a press conference. “It’s therefore able to create knowledge itself from first principles; from a blank slate [...] This enables it to be much more powerful than previous versions.”

Silver explained that as Zero played itself, it rediscovered Go strategies developed by humans over millennia. “It started off playing very naively like a human beginner, [but] over time it played games which were hard to differentiate from human professionals,” he said. The program hit upon a number of well-known patterns and variations during self-play, before developing never-before-seen stratagems. “It found these human moves, it tried them, then ultimately it found something it prefers,” he said. As with earlier versions of AlphaGo, DeepMind hopes Zero will act as an inspiration to professional human players, suggesting new moves and stratagems for them to incorporate into their game.



As well as being a better player, Zero has other important advantages compared to earlier versions. First, it needs much less computing power, running on just four TPUs (specialized AI processors built by Google), while earlier versions used 48. This, says Silver, allows for a more flexible system that can be improved with less hassle, “which, at the end of the day, is what really matters if we want to make progress.” And second, because Zero is self-taught, it shows that we can develop cutting-edge algorithms without depending on stacks of data.

For experts in the field, these developments are a big part of what makes this new research exciting. That’s because they offer a rebuttal to a persistent criticism of contemporary AI: that much of its recent gains come mostly from cheap computing power and massive datasets. Skeptics in the field, like pioneer Geoffrey Hinton, suggest that machine learning is a bit of a one-trick pony: piling on data and compute is helping deliver new functions, but the current pace of advances is unsustainable. DeepMind’s latest research offers something of a counterargument by demonstrating that there are major improvements to be made simply by focusing on algorithms.

“This work shows that a combination of existing techniques can go somewhat further than most people in the field have thought, even though the techniques themselves are not fundamentally new,” Ilya Sutskever, a research director at the Elon Musk-backed OpenAI institute, told The Verge. “But ultimately, what matters is that researchers keep advancing the field, and it's less important if this goal is achieved by developing radically new techniques, or by applying existing techniques in clever and unexpected ways.”

An earlier version of AlphaGo made headlines when it beat Go champion Lee Se-dol in 2016. That version learned how to play from human games. Photo: Google/Getty Images.
In the case of AlphaGo Zero, what is particularly clever is the removal of any need for human expertise in the system. Satinder Singh, a computer science professor who wrote an accompanying article on DeepMind’s research in Nature, praises the company’s work as “elegant,” and singles out these aspects.

Singh tells The Verge that it’s a significant win for the field of reinforcement learning — a branch of AI in which programs learn by obtaining rewards for reaching certain goals, but are offered no guidance on how to get there. This is a less mature field of work than supervised learning (where programs are fed labeled data and learn from that), but it has potentially greater rewards. After all, the more a machine can teach itself without human guidance, the better, says Singh.
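
A toy example makes that distinction concrete. In the hypothetical Python sketch below, a Q-learning agent is told only that reaching the end of a six-cell corridor earns a reward of +1; nothing labels the correct action at each step, yet the agent works out the “always move right” policy on its own.

```python
import random

# Hypothetical toy example of reinforcement learning: the agent is told only
# that reaching the right end of a six-cell corridor is worth +1 reward, and
# must discover on its own, via Q-learning, that "move right" is how to get
# there. No correct action is ever labeled for it.

N_STATES = 6                  # cells 0..5; cell 5 is the goal
ACTIONS = (-1, +1)            # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def greedy(state):
    """Best-known action in this state, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for _ in range(500):                           # training episodes
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])  # TD update
        s = nxt

# After training, the learned policy in every non-goal cell should be +1 (right).
print([greedy(s) for s in range(N_STATES - 1)])
```

The same reward-only principle, scaled up with deep networks and self-play, is what Zero relies on.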

“Over the past five, six years, reinforcement learning has emerged from academia to have a much broader impact in the wider world, and DeepMind can take some of the credit for that,” says Singh. “The fact that they were able to build a better Go player here with an order of magnitude less data, computation, and time, using just straight reinforcement learning — it’s a pretty big achievement. And because reinforcement learning is such a big slice of AI, it’s a big step forward in general.”

What are the applications for these sorts of algorithms? According to DeepMind co-founder Demis Hassabis, they can provide society with something akin to a thinking engine for scientific research. “A lot of the AlphaGo team are now moving onto other projects to try and apply this technology to other domains,” said Hassabis at a press conference.

Hassabis explains that you can think of AlphaGo as essentially a very good machine for searching through complicated data. In the case of Zero, that data consists of possible moves in a game of Go. But because Zero was not programmed to understand Go specifically, it could be reprogrammed to discover information in other fields: drug discovery, protein folding, quantum chemistry, particle physics, and material design.

Hassabis suggests that a descendant of AlphaGo Zero could be used to search for a room temperature superconductor — a hypothetical substance that allows electrical current to flow with zero lost energy, allowing for incredibly efficient power systems. (Superconductors exist, but they only currently work at extremely cold temperatures.) As it did with Go, the algorithm would start by combining different inputs (in this case, the atomic composition of various materials and their associated qualities) until it discovered something humans had missed.


“Maybe there is a room temperature superconductor out and about. I used to dream about that when I was a kid, looking through my physics books,” says Hassabis. “But there’s just so many combinations of materials, it’s hard to know whether [such a thing exists].”

Of course, this would be much more complicated than simply pointing AlphaGo Zero at the Wikipedia page for chemistry and physics and saying “have at it.” Despite its complexity, Go, like all board games, is relatively easy for computers to understand. The rules are finite, there’s no element of luck, no hidden information, and — most importantly — researchers have access to a perfect simulation of the game. This means an AI can run millions of tests and be sure it’s not missing anything. Finding other fields that meet these criteria limits the applicability of Zero’s intelligence. DeepMind hasn’t created a magical thinking machine.

These caveats aside, the research published today does get DeepMind just a little bit closer to solving the first half of its tongue-in-cheek, two-part mission statement. Part one: solve intelligence; part two: use it to make the world a better place. “We’re trying to build general purpose algorithms and this is just one step towards that, but it’s an exciting step,” says Hassabis.


Tuesday, October 17, 2017

Google’s Machine Learning Software Has Learned to Replicate Itself

As reported by Futurism: Back in May, Google revealed its AutoML project: artificial intelligence (AI) designed to help it create other AIs. Now, Google has announced that AutoML has beaten the human AI engineers at their own game by building machine-learning software that’s more efficient and powerful than the best human-designed systems.

An AutoML system recently broke a record for categorizing images by their content, scoring 82 percent accuracy. While that’s a relatively simple task, AutoML also beat the human-built system at a more complex task integral to autonomous robots and augmented reality: marking the location of multiple objects in an image. For that task, AutoML scored 43 percent versus the human-built system’s 39 percent.

These results are meaningful because even at Google, few people have the requisite expertise to build next-generation AI systems. It takes a rarefied skill set to automate this area, but once that is achieved, it will change the industry. “Today these are handcrafted by machine learning scientists and literally only a few thousands of scientists around the world can do this,” Google CEO Sundar Pichai said, as WIRED reports. “We want to enable hundreds of thousands of developers to be able to do it.”
Much of metalearning is about imitating human neural networks and trying to feed more and more data through those networks. This isn’t — to use an old saw — rocket science. Rather, it’s a lot of plug and chug work that machines are actually well-suited to do once they’ve been trained. The hard part is imitating the brain structure in the first place, and at scales appropriate to take on more complex problems.
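
As an illustration of that “plug and chug” search, here is a minimal, hypothetical Python sketch of architecture search: a loop proposes candidate network designs, scores each one, and keeps the best. Google’s AutoML actually uses a reinforcement-learning controller and trains real networks at each step; the evaluation function below is a clearly labeled stand-in so the sketch stays self-contained and runnable.

```python
import random

# Hypothetical sketch of the AutoML idea: propose candidate architectures,
# score each candidate, keep the best. Google's system uses an RL controller
# and real training runs; evaluate() here is a stand-in scoring function.

SEARCH_SPACE = {
    "num_layers":    [2, 4, 8, 16],
    "layer_width":   [64, 128, 256, 512],
    "activation":    ["relu", "tanh", "swish"],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def sample_architecture():
    """Propose one candidate by picking a value for each design choice."""
    return {key: random.choice(options) for key, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for 'train this network and report validation accuracy'.
    In a real system this step trains a model on the target dataset."""
    score = 0.5
    score += 0.05 * SEARCH_SPACE["num_layers"].index(arch["num_layers"])
    score += 0.03 * SEARCH_SPACE["layer_width"].index(arch["layer_width"])
    score -= 0.04 * SEARCH_SPACE["learning_rate"].index(arch["learning_rate"])
    return score + random.uniform(-0.02, 0.02)   # noise, like real training runs

best_arch, best_score = None, float("-inf")
for trial in range(50):                # search budget: 50 candidate networks
    arch = sample_architecture()
    score = evaluate(arch)
    if score > best_score:
        best_arch, best_score = arch, score

print(f"best score {best_score:.3f} with architecture {best_arch}")
```

A real system evaluates many candidates this way, each requiring a full training run, which is exactly the kind of repetitive grunt work machines handle better than people.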

The Future of Machine-Built AI
It’s still easier to adjust an existing system to meet new needs than it is to design a neural network from the ground up. However, this research seems to suggest this is a temporary state of affairs. As it becomes easier for AIs to design new systems with increased complexity, it will be important for humans to play a gatekeeping role. AI systems can easily make biased connections accidentally — such as associating ethnic and gendered identities with negative stereotypes. However, if human engineers are spending less time on the grunt work involved in creating the systems, they’ll have more time to devote to oversight and refinement.

Ultimately, Google is aiming to hone AutoML until it can function well enough for programmers to use it for practical applications. If the company succeeds, AutoML is likely to have an impact far beyond the walls of Google. WIRED reports that at the same event last week, Pichai said, “We want to democratize this,” meaning the company hopes to make AutoML available outside Google.


Monday, October 16, 2017

Dubai Police will Ride Hoverbikes Straight out of 'Star Wars'

As reported by Mashable: Dubai is aggressively turning itself into a "Future City," putting self-flying taxis in the skies and a facial recognition system in its airport. The Dubai police department's latest ride is now adding another sci-fi transportation staple: the hoverbike.
The Dubai police, which already has luxury patrol cars, self-driving pursuit drones, and a robot officer, just announced it will soon have officers buzzing around on hoverbikes, which look like an early version of the speeder bikes used by the scout troopers on Endor in Return of the Jedi.

The force (see what I did there?) unveiled its new Hoversurf Scorpion craft at the Gitex Technology Week conference, according to UAE English language publication Gulf News. The police force will use the hoverbike for emergency response scenarios, giving officers the ability to zoom over congested traffic conditions by taking to the air. 


The Russian-made craft is fully electric and can handle loads of up to 600 pounds, offering about 25 minutes of use per charge with a top speed of about 43 mph. The Scorpion can also fly autonomously for almost four miles at a time for other emergencies.


Hoversurf CEO Alexander Atamanov announced on Facebook the company and the Dubai police have signed a memorandum of understanding that will allow Hoversurf to begin mass production of the Scorpion crafts in the Dubai area to serve the department. 

The police hoverbikes join the Dubai fire department's use of "jet packs" to fight fires, providing sci-fi solutions to real-life emergencies. Let's just hope the Dubai police are better pilots than the Empire's scout troopers. 



Elon Musk Unveils Tesla Factory Model 3 Video to Showcase Full Automation

Tesla CEO Elon Musk has released another video showing the autonomous robots
of the Model 3 production line hard at work.  Musk is likely hoping to prove the
vehicle is not being assembled by hand.
As reported by Futurism: Elon Musk took to Instagram to post another video of Tesla’s team of KUKA industrial robots at work building the Model 3, to showcase Tesla’s focus on automation in its development of electric vehicles. Earlier this week Musk sent out a video of Tesla’s Model 3 assembly line slowed down to 1/10th speed.

Robots Under Pressure
The new video, described as “Stamping Model 3 body panels (real-time),” shows the automated process of fabricating the electric vehicle’s body at full speed. This comes amid numerous advances and controversies surrounding the automation of production. A few days ago, Jane Kim of the San Francisco Board of Supervisors established a committee dubbed the Jobs of the Future Fund to explore how best to smooth the transition toward more automation.

Skeptics of an automated future, like World Bank chief Jim Yong Kim, warn that humans are in for a job disruption not seen since the industrial revolution, and that we’d best invest in education and health. Kim argues that intelligent automation and reactionary political elements (such as resistance to the forces of globalization) may threaten economic development, putting the world, he adds, on a “crash course.”

But billionaire entrepreneur and Virgin Group founder Richard Branson thinks we have meaningful alternatives, telling Business Insider Nordic that a safety net provided by a basic income could help counter the effects of artificial intelligence and increased automation.

Why Elon Musk is Sharing
Musk is likely sending out these videos in response to claims that the Model 3 is largely being built by hand, a claim Tesla has called “fundamentally wrong and misleading.” The company has been unable to keep pace with the production levels announced by Musk just this past summer.

Another statement from Tesla said, “We are simply working through the S-curve of production that we drew out for the world to see at our launch event in July. There’s a reason it’s called production hell.”

We can expect more videos from Musk showing that the Model 3 is in the hands of an autonomous, streamlined production line. Hopefully, we will soon see the process translate into speedy production; the company has so far had difficulty meeting the high demand for the new model.