Thursday, October 19, 2017

Watch this Autonomous Track Loader Excavate Dirt Without a Human Operator

We have self-driving cars, self-driving trucks, self-driving boats, and self-driving buses, so it was only a matter of time before we got self-driving bulldozers.

Built Robotics is a new company coming out of stealth today that aims to disrupt the $130 billion excavation industry with its fleet of autonomous earth movers. Rather than sit in the dusty cab all day, operators can program in the coordinates and dimensions of the hole that needs digging, then stand off to the side and watch the vehicle do all the work. The startup just raised $15 million to hire engineers and get the product to market.

And much like the self-driving vehicles operated by companies like Waymo and GM, these robot bulldozers and backhoes use sensors like LIDAR and GPS to “see” the world around them. But unlike any of the autonomous cars driving around California or Arizona these days, these heavy movers use specially designed sensors built to withstand the intense vibration involved in excavation.

Built Robotics is headquartered on almost an entire acre of dirt-filled construction space in a nondescript, fenced-off area in the Dogpatch on the east side of San Francisco, where the robotic construction equipment is refined and tested. Noah Ready-Campbell, CEO and founder, was coming off of several years at Google and then eBay when he decided to leverage his early years watching his contractor father on construction sites into a new business.

“I would spend most summers working for him, painting, scraping, digging up trash,” Ready-Campbell says. “At the time I hated it, and thought, ‘I’m never going to do this.’”

And in a sense, he still won’t, because his robot bulldozers will be doing all the digging and scraping. To those concerned about jobs lost to automation, his main argument is safety and productivity. Fatal injuries among construction and extraction occupations rose by 2 percent to 924 cases in 2015, according to the US Bureau of Labor Statistics — the highest level since 2008. Meanwhile, 70 percent of construction firms lack skilled workers, potentially stalling commercial and home building projects.

“It can be boring, too,” Ready-Campbell says. “It’s monotonous work. There are safety issues. It’s easy to zone out, make mistakes, and over excavate a site... I’ve talked to a lot of operators and owners, and most say this is great.”

Self-driving cars are programmed to be accurate down to the centimeter, but autonomous construction equipment operates within a confined, geofenced space without variables like pedestrians and bicyclists, so the software that powers the machines can be much less precise. “We actually need [our machinery] to excavate with dirt and collide with the environment and do its job,” Ready-Campbell says. “You’d never want a self-driving car to collide with its environment.”

His contractor father still took a little convincing. “When I first told dad, he reacted pretty negatively. He was like, ‘Why do you want to steal these guys’ jobs?’” Ready-Campbell says. But after watching the machines in operation, “He’s come around on it.”

DeepMind’s Go-Playing AI Doesn’t Need Human Help to Beat Us Anymore

Google’s AI subsidiary DeepMind has unveiled the latest version of its Go-playing software, AlphaGo Zero. The new program is a significantly better player than the version that beat the game’s world champion earlier this year, but, more importantly, it’s also entirely self-taught. DeepMind says this means the company is one step closer to creating general purpose algorithms that can intelligently tackle some of the hardest problems in science, from designing new drugs to more accurately modeling the effects of climate change.

The original AlphaGo demonstrated superhuman Go-playing ability, but needed the expertise of human players to get there. Namely, it used a dataset of more than 100,000 Go games as a starting point for its own knowledge. AlphaGo Zero, by comparison, has only been programmed with the basic rules of Go. Everything else it learned from scratch. As described in a paper published in Nature today, Zero developed its Go skills by competing against itself. It started with random moves on the board, but every time it won, Zero updated its own system, and played itself again. And again. Millions of times over.
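That training loop, play a game, nudge the values toward the outcome, repeat, can be sketched in miniature. The example below is an illustrative stand-in, not DeepMind's method: it swaps Go for the toy game of Nim (21 sticks, take one to three per turn, whoever takes the last stick wins), and replaces AlphaGo Zero's deep network and tree search with a simple tabular value update.

```python
import random
from collections import defaultdict

# Tabular self-play learning on Nim: 21 sticks, take 1-3 per turn,
# the player who takes the last stick wins. The agent plays both sides;
# every move made by the winner is reinforced, the loser's penalized.

Q = defaultdict(float)          # (sticks_remaining, take) -> value estimate
ALPHA, EPSILON = 0.2, 0.1       # learning rate and exploration rate

def choose(sticks, explore=True):
    moves = [t for t in (1, 2, 3) if t <= sticks]
    if explore and random.random() < EPSILON:
        return random.choice(moves)           # occasionally explore
    return max(moves, key=lambda t: Q[(sticks, t)])

def self_play_episode():
    sticks = 21
    history = {0: [], 1: []}    # moves made by each side
    player = 0
    while sticks > 0:
        take = choose(sticks)
        history[player].append((sticks, take))
        sticks -= take
        if sticks == 0:
            winner = player
        player ^= 1
    # Nudge each side's move values toward the game's outcome.
    for p in (0, 1):
        reward = 1.0 if p == winner else -1.0
        for state_action in history[p]:
            Q[state_action] += ALPHA * (reward - Q[state_action])

random.seed(0)
for _ in range(50_000):
    self_play_episode()

# Greedy play after training: from 6 sticks, take 2 (leaving 4).
print(choose(6, explore=False))
```

After 50,000 self-play games the greedy policy learns the classic endgame strategy of leaving the opponent a multiple of four sticks, despite never being told anything beyond the rules.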

After three days of self-play, Zero was strong enough to defeat the version of itself that beat 18-time world champion Lee Se-dol, winning handily — 100 games to nil. After 40 days, it had a 90 percent win rate against the most advanced version of the original AlphaGo software. DeepMind says this makes it arguably the strongest Go player in history.

“By not using human data — by not using human expertise in any fashion — we’ve actually removed the constraints of human knowledge,” said AlphaGo Zero’s lead programmer, David Silver, at a press conference. “It’s therefore able to create knowledge itself from first principles; from a blank slate [...] This enables it to be much more powerful than previous versions.”

Silver explained that as Zero played itself, it rediscovered Go strategies developed by humans over millennia. “It started off playing very naively like a human beginner, [but] over time it played games which were hard to differentiate from human professionals,” he said. The program hit upon a number of well-known patterns and variations during self-play, before developing never-before-seen stratagems. “It found these human moves, it tried them, then ultimately it found something it prefers,” he said. As with earlier versions of AlphaGo, DeepMind hopes Zero will act as an inspiration to professional human players, suggesting new moves and stratagems for them to incorporate into their game.

As well as being a better player, Zero has other important advantages compared to earlier versions. First, it needs much less computing power, running on just four TPUs (specialized AI processors built by Google), while earlier versions used 48. This, says Silver, allows for a more flexible system that can be improved with less hassle, “which, at the end of the day, is what really matters if we want to make progress.” And second, because Zero is self-taught, it shows that we can develop cutting-edge algorithms without depending on stacks of data.

For experts in the field, these developments are a big part of what makes the new research exciting. That’s because they offer a rebuttal to a persistent criticism of contemporary AI: that much of its recent gains come mostly from cheap computing power and massive datasets. Skeptics, including pioneer Geoffrey Hinton, suggest that machine learning is a bit of a one-trick pony: piling on data and compute delivers new functions, but the current pace of advances is unsustainable. DeepMind’s latest research pushes back by demonstrating that there are major improvements to be made simply by focusing on algorithms.

“This work shows that a combination of existing techniques can go somewhat further than most people in the field have thought, even though the techniques themselves are not fundamentally new,” Ilya Sutskever, a research director at the Elon Musk-backed OpenAI institute, told The Verge. “But ultimately, what matters is that researchers keep advancing the field, and it's less important if this goal is achieved by developing radically new techniques, or by applying existing techniques in clever and unexpected ways.”

An earlier version of AlphaGo made headlines when it beat Go champion Lee Se-dol in 2016. That version learned how to play from humans. Photo: Google/Getty Images.
In the case of AlphaGo Zero, what is particularly clever is the removal of any need for human expertise in the system. Satinder Singh, a computer science professor who wrote an accompanying article on DeepMind’s research in Nature, praises the company’s work as “elegant,” and singles out these aspects.

Singh tells The Verge that it’s a significant win for the field of reinforcement learning — a branch of AI in which programs learn by obtaining rewards for reaching certain goals, but are offered no guidance on how to get there. This is a less mature field of work than supervised learning (where programs are fed labeled data and learn from that), but it has potentially greater rewards. After all, the more a machine can teach itself without human guidance, the better, says Singh.

“Over the past five, six years, reinforcement learning has emerged from academia to have much broader impact in the wider world, and DeepMind can take some of the credit for that,” says Singh. “The fact that they were able to build a better Go player here with an order of magnitude less data, computation, and time, using just straight reinforcement learning — it’s a pretty big achievement. And because reinforcement learning is such a big slice of AI, it’s a big step forward in general.”

What are the applications for these sorts of algorithms? According to DeepMind co-founder Demis Hassabis, they can provide society with something akin to a thinking engine for scientific research. “A lot of the AlphaGo team are now moving onto other projects to try and apply this technology to other domains,” said Hassabis at a press conference.

Hassabis explains that you can think of AlphaGo as essentially a very good machine for searching through complicated data. In the case of Zero, that data is composed of possible moves in a game of Go. But because Zero was not programmed to understand Go specifically, it could be reprogrammed to discover information in other fields: drug discovery, protein folding, quantum chemistry, particle physics, and material design.

Hassabis suggests that a descendant of AlphaGo Zero could be used to search for a room temperature superconductor — a hypothetical substance that allows electrical current to flow with zero lost energy, allowing for incredibly efficient power systems. (Superconductors exist, but they only currently work at extremely cold temperatures.) As it did with Go, the algorithm would start by combining different inputs (in this case, the atomic composition of various materials and their associated qualities) until it discovered something humans had missed.

“Maybe there is a room temperature superconductor out and about. I used to dream about that when I was a kid, looking through my physics books,” says Hassabis. “But there’s just so many combinations of materials, it’s hard to know whether [such a thing exists].”

Of course, this would be much more complicated than simply pointing AlphaGo Zero at the Wikipedia page for chemistry and physics and saying “have at it.” Despite its complexity, Go, like all board games, is relatively easy for computers to understand. The rules are finite, there’s no element of luck, no hidden information, and — most importantly — researchers have access to a perfect simulation of the game. This means an AI can run millions of tests and be sure it’s not missing anything. Few other fields meet all these criteria, which limits the applicability of Zero’s intelligence. DeepMind hasn’t created a magical thinking machine.

These caveats aside, the research published today does get DeepMind just a little bit closer to solving the first half of its tongue-in-cheek, two-part mission statement. Part one: solve intelligence; part two: use it to make the world a better place. “We’re trying to build general purpose algorithms and this is just one step towards that, but it’s an exciting step,” says Hassabis.

Tuesday, October 17, 2017

Google’s Machine Learning Software Has Learned to Replicate Itself

As reported by Futurism: Back in May, Google revealed its AutoML project: artificial intelligence (AI) designed to help it create other AIs. Now, Google has announced that AutoML has beaten its human AI engineers at their own game by building machine-learning software that’s more efficient and powerful than the best human-designed systems.

An AutoML system recently broke a record for categorizing images by their content, scoring 82 percent. While that’s a relatively simple task, AutoML also beat the human-built system at a more complex task integral to autonomous robots and augmented reality: marking the location of multiple objects in an image. For that task, AutoML scored 43 percent versus the human-built system’s 39 percent.

These results are meaningful because even at Google, few people have the requisite expertise to build next-generation AI systems. It takes a rarefied skill set to automate this area, but once that is achieved, it will change the industry. “Today these are handcrafted by machine learning scientists and literally only a few thousands of scientists around the world can do this,” Google CEO Sundar Pichai said, as reported by WIRED. “We want to enable hundreds of thousands of developers to be able to do it.”
Much of metalearning is about imitating human neural networks and trying to feed more and more data through those networks. This isn’t — to use an old saw — rocket science. Rather, it’s a lot of plug and chug work that machines are actually well-suited to do once they’ve been trained. The hard part is imitating the brain structure in the first place, and at scales appropriate to take on more complex problems.
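That plug-and-chug loop can be made concrete with a toy sketch. The code below is an illustrative stand-in, not Google's AutoML: the "architecture" being searched is just a pair of training hyperparameters for a one-variable linear fit, and the controller is plain random search rather than a learned one. The shape of the loop is the point: propose a candidate, train it, score it on held-out data, keep the best.

```python
import random

# Toy sketch of an AutoML-style search loop: a controller proposes
# candidate configurations, each candidate is trained and scored on
# held-out data, and the best performer is kept. The "model" is a
# stand-in (a 1-D linear fit trained by gradient descent); real AutoML
# searches over network architectures with a learned controller.

random.seed(1)
train = [(float(x), 2.0 * x + 1.0) for x in range(10)]       # rule: y = 2x + 1
val = [(x + 0.5, 2.0 * (x + 0.5) + 1.0) for x in range(10)]  # held-out points

def train_and_score(lr, epochs):
    """Train w*x + b by SGD; return mean squared error on validation data."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in train:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return sum(((w * x + b) - y) ** 2 for x, y in val) / len(val)

search_space = {"lr": [0.02, 0.005, 0.001], "epochs": [10, 50, 200]}

best_cfg, best_loss = None, float("inf")
for _ in range(20):            # controller loop: propose, score, keep the best
    cfg = {k: random.choice(v) for k, v in search_space.items()}
    loss = train_and_score(cfg["lr"], cfg["epochs"])
    if loss < best_loss:
        best_cfg, best_loss = cfg, loss

print(best_cfg, round(best_loss, 4))
```

Swapping the stand-in model for a neural network and the random sampler for a learned controller gives the basic structure of neural architecture search; the grunt work of training and scoring thousands of candidates is exactly the part machines handle well.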

The Future of Machine-Built AI
It’s still easier to adjust an existing system to meet new needs than it is to design a neural network from the ground up. However, this research seems to suggest this is a temporary state of affairs. As it becomes easier for AIs to design new systems with increased complexity, it will be important for humans to play a gatekeeping role. AI systems can easily make biased connections accidentally — such as associating ethnic and gendered identities with negative stereotypes. However, if human engineers are spending less time on the grunt work involved in creating the systems, they’ll have more time to devote to oversight and refinement.

Ultimately, Google is aiming to hone AutoML until it can function well enough for programmers to use it for practical applications. If it succeeds, AutoML is likely to have an impact far beyond the walls of Google. WIRED reports that Pichai said at the same event last week that “We want to democratize this,” meaning the company hopes to make AutoML available outside Google.

Monday, October 16, 2017

Dubai Police will Ride Hoverbikes Straight out of 'Star Wars'

As reported by Mashable: Dubai is aggressively turning itself into a "Future City," putting self-flying taxis in the skies and a facial recognition system in its airport. The Dubai police department's latest ride is now adding another sci-fi transportation staple: the hoverbike.
The Dubai police, which already has luxury patrol cars, self-driving pursuit drones, and a robot officer, just announced it will soon have officers buzzing around on hoverbikes, which look like an early version of the speeder bikes used by the scout troopers on Endor in Return of the Jedi.

The force (see what I did there?) unveiled its new Hoversurf Scorpion craft at the Gitex Technology Week conference, according to UAE English language publication Gulf News. The police force will use the hoverbike for emergency response scenarios, giving officers the ability to zoom over congested traffic conditions by taking to the air. 

The Russian-made craft is fully electric and can handle loads of up to 600 pounds, offering about 25 minutes of use per charge with a top speed of about 43 mph. The Scorpion can also fly autonomously for almost four miles at a time for other emergencies.

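As a quick back-of-envelope check (my arithmetic, not the article's), the quoted endurance and top speed put a ceiling on how far a piloted Scorpion could fly per charge:

```python
# Rough upper bound on range implied by the quoted specs: endurance
# times top speed. Actual range would be lower, since flying at top
# speed almost certainly drains the battery faster than cruising.

top_speed_mph = 43       # quoted top speed
endurance_min = 25       # quoted flight time per charge

max_range_miles = top_speed_mph * endurance_min / 60
print(round(max_range_miles, 1))  # roughly 18 miles, at best
```

Even under that optimistic assumption the craft tops out around 18 miles per charge, which fits its intended role as a short-hop emergency responder rather than a patrol vehicle.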

Hoversurf CEO Alexander Atamanov announced on Facebook the company and the Dubai police have signed a memorandum of understanding that will allow Hoversurf to begin mass production of the Scorpion crafts in the Dubai area to serve the department. 

The police hoverbikes join the Dubai fire department's use of "jet packs" to fight fires, providing sci-fi solutions to real-life emergencies. Let's just hope the Dubai police are better pilots than the Empire's scout troopers. 

Elon Musk Unveils Tesla Factory Model 3 Video to Showcase Full Automation

Tesla CEO Elon Musk has released another video showing the autonomous robots of the Model 3 production line hard at work. Musk is likely hoping to prove the vehicle is not being assembled by hand.
As reported by Futurism: Elon Musk took to Instagram to post another video of Tesla’s team of KUKA industrial robots at work building the Model 3, to showcase Tesla’s focus on automation in its development of electric vehicles. Earlier this week Musk sent out a video of Tesla’s Model 3 assembly line slowed down to 1/10th speed.

Robots Under Pressure
The new video, described as "Stamping Model 3 body panels (real-time)," shows the automated process of fabricating the electric vehicle's body at full speed. This comes amid numerous advances and controversies surrounding the automation of production. A few days ago, Jane Kim of the San Francisco Board of Supervisors established a committee dubbed the Jobs of the Future Fund to explore how best to smooth the transition toward more automation.

Skeptics of an automated future, like World Bank Chief Jim Yong Kim, warn that humans are in for a job disruption not seen since the Industrial Revolution, and that we’d best invest in education and health. Kim argues that intelligent automation, combined with reactionary political elements such as resistance to the forces of globalization, may threaten economic development, putting the world, he adds, on a “crash course.”

But billionaire entrepreneur and Virgin Group founder Richard Branson thinks we have meaningful alternatives, telling Business Insider Nordic that a safety net provided by a basic income could help counter the effects of artificial intelligence and increased automation.

Why Elon Musk is Sharing
Musk is likely sending out these videos in response to claims that the Model 3 is largely being built by hand, a claim Tesla has called “fundamentally wrong and misleading.” The company has been unable to keep pace with the production levels Musk announced just this past summer.

Another statement from Tesla said, “We are simply working through the S-curve of production that we drew out for the world to see at our launch event in July. There’s a reason it’s called production hell.”

We can expect more videos from Musk showing that the Model 3 is in the hands of an autonomous, streamlined production line. Hopefully the process will soon translate into speedy production; so far, the company has had difficulty meeting the high demand for the new model.

Saturday, October 14, 2017

Elon Musk’s Rocket Could Get You Anywhere on Earth in 60 Min. Here’s What It Would Feel Like.

When Elon Musk announced the new BFR, he also showed an "unexpected" use for it. More than just sending people to Mars, the redesigned rocket and spacecraft could also be used to ferry people between the world's major cities in less than 60 minutes.
As reported by Futurism: To get from one city to another in just 30 to 60 minutes—who doesn’t want that? SpaceX founder and CEO Elon Musk definitely wants it, and that’s one of the potential uses for his redesigned BFR: Earth-to-Earth flights between major cities.

Musk previewed the latest BFR update at the 2017 International Astronautical Congress in September. Designed to be Musk’s new rocket and spacecraft for Mars, the BFR could also serve as a suborbital spacecraft for SpaceX, said former astronaut Leroy Chiao. Essentially, suborbital spacecraft—like Virgin Galactic’s VSS Unity and Blue Origin’s New Shepard—are meant for the budding space tourism industry, and could function something like extremely high-tech, high-flying airplanes. Flying at the BFR’s 4.6 miles per second, you could get from New York to Los Angeles in just 25 minutes.
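A quick sanity check on those figures (my arithmetic; the roughly 2,450-mile New York to Los Angeles great-circle distance is an assumption, not from the article) shows that at 4.6 miles per second the cruise phase alone takes only about nine minutes, so the 25-minute total presumably covers ascent and reentry as well:

```python
# Back-of-envelope check on the quoted trip time. The NY-LA distance
# here is an assumed great-circle figure, not from the article.

distance_miles = 2450        # approximate NY-LA great-circle distance
speed_mi_per_s = 4.6         # quoted BFR speed

cruise_minutes = distance_miles / speed_mi_per_s / 60
print(round(cruise_minutes, 1))  # about 8.9 minutes at cruise speed
```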

But just how would riding such a spacecraft feel? Chiao, who’s flown aboard three NASA space shuttles and a Russian Soyuz, described it to Business Insider: “[L]aunch, insertion and entry would be similar to a capsule spacecraft [like the Soyuz], with the difference being in the final phase of landing,” he said.

Can You Handle It?
Chiao suggests that flying aboard a BFR won’t exactly be easy. “During launch on a rocket with liquid engines […] the liftoff is very smooth and one really can’t feel it,” he described. “Ignition of the next stage engine(s) causes a momentary bump in g-force. As you get to the last part of ascent, you feel some g’s come on through your chest, but it is not uncomfortable.”

Astronaut Leroy Chiao on the way to the launch of ISS Expedition 10. (Image credit: NASA)
The crucial moment is when the BFR’s rocket engines separate from the spacecraft, when passengers would feel “instantly weightless.” Here’s how he describes it:
"You feel like you are tumbling, as your balance system struggles to make sense of what is happening, and you are very dizzy. You feel the fluid shift [in your body], kind of like laying heads-down on an incline, because there is no longer gravity pulling your body fluids down into your legs. All this can cause nausea. As you start to re-enter the atmosphere, you would feel the g’s come on smoothly and start to build."
Then, finally, the BFR lands. “[Y]ou would both feel and hear [the engines],” Chiao said. “As the thrust builds, you would feel the g’s come on again and then at touchdown, you would feel a little bump.”

If you think you can handle it, then maybe the BFR’s Earth-to-Earth travel is for you. “[T]his would not be for the faint of heart, and it is difficult to see how this would be inexpensive,” he said. Keep in mind, however, that there’s still a lot SpaceX and Musk have to figure out before this actually works. “But the one thing I’ve learned from observing Elon, is not to count him out,” Chiao added.

Friday, October 13, 2017

Toyota’s Fuel-Cell Big Rigs are Ready to Haul Cargo

As reported by Engadget: After completing 4,000 "development" miles at the port of Los Angeles, Toyota's Project Portal hydrogen fuel-cell big rig is ready to start transporting cargo from that port and the one in Long Beach to rail yards and warehouses beginning on October 23.

The Class 8 Toyota truck (the EV trucks proposed by Cummins and Tesla are only Class 7) is capable of producing more than 670 horsepower and 1,325 pound-feet of torque -- more than enough for even the heaviest Amazon delivery. The semi began its testing at the ports back in April, with Toyota partnering with drayage (short-distance goods transport) provider Southern Counties Express. As the trial progressed, more and more cargo was added, until the two companies decided the truck was ready to join the proper fleet of vehicles later this month.

Powering the truck are two fuel-cell stacks from Toyota's fuel-cell Mirai sedan and a 12kWh battery. The automaker says the big rig is capable of transporting 80,000 pounds and has a range of about 200 miles per fill-up. That's more than enough to move cargo around the Los Angeles area. Plus, it can quickly be put back on the road, since hydrogen fuel-cell vehicles can be refueled as quickly as a traditional gas-powered car.

While automakers have been touting their long-term electric vehicle plans, many of them have been simultaneously working on fuel-cell vehicles as a way to hedge their bets. A hydrogen fuel-cell vehicle can refuel as quickly as a gasoline vehicle, but like an EV, produces no CO2. It seems like it would be a seamless transition from traditional driving, or at least more so than what's expected from electric vehicles, which need to be plugged in and charged for hours to fulfill their range promises.

At issue is the lack of a robust hydrogen refueling infrastructure. Toyota and other automakers have worked closely with third parties to set up stations in Los Angeles, San Francisco, and the Northeast. Anywhere else, and you're basically out of luck. But if programs like Toyota's Project Portal prove to be a hit, it might be just the boost the fuel-cell infrastructure needs for mass adoption.