Monday, June 20, 2016

1,000-Core “Kilo-Core” Processor Built at UC Davis

As reported by SlashGear: When MediaTek announced its deca-core mobile processor, it almost seemed insane in a world that's very much settled on octa-cores. The chip maker, however, has nothing on the silicon produced by researchers at the Department of Electrical and Computer Engineering at the University of California, Davis. Although it definitely won't fit inside a smartphone, tablet, or even a laptop for that matter, the chip boasts of being the world's first kilo-core processor. That's 1,000 processing cores at your service, making even the beefiest gaming rig cry in shame.

Of course, you probably won't be using it for gaming, or any other consumer purpose. It's still something that exists only in the controlled conditions of a laboratory, but it is nonetheless an achievement worth bragging about. According to electrical and computer engineering professor Bevan Baas, the highest number of cores ever achieved in a multi-core chip before this was about 300. The UC Davis chip has more than three times that many.

That's not its only bragging right either. Each of the 1,000 cores is effectively an independent processor: it can run its own tiny program independently of the others. This "Multiple Instruction, Multiple Data" (MIMD) approach is more flexible than the Single Instruction, Multiple Data (SIMD) model used by most modern commercial processors.
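The distinction is easier to see in code. Here is a minimal Python sketch, which has nothing to do with how the KiloCore chip is actually programmed, contrasting a SIMD-style operation (one instruction applied across many data elements) with MIMD-style workers that each run their own program:

```python
# Conceptual sketch only: contrasts SIMD-style data parallelism with
# MIMD-style independent tasks. This is not how the KiloCore chip is
# programmed; it just illustrates the difference between the two models.
import numpy as np
from multiprocessing import Pool

# SIMD: one instruction stream applied to many data elements at once.
data = np.arange(16)
simd_result = data * 2          # the same operation on every element

# MIMD: independent workers, each running its own instruction stream.
def count_words(text):
    return len(text.split())

def sum_squares(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(2) as pool:
        r1 = pool.apply_async(count_words, ("tiny independent program",))
        r2 = pool.apply_async(sum_squares, (1000,))
        print(simd_result)
        print(r1.get(), r2.get())
```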

And there's more to it than that. Much like the "True Octa Core" feature MediaTek flaunted a few years ago, each core can power itself down when not in use, so you aren't exactly going to be using 1,000 times the power. In fact, the chip can be powered by a single AA battery.

In terms of specs, the cores operate at 1.78 GHz, and the chip has been clocked processing 1.78 trillion instructions per second. A special feature of the chip is that the cores send and receive data directly to and from each other instead of going through a shared memory pool, such as a last-level cache, which in a design this wide would have been a bottleneck rather than a speed increase. The chip itself was fabricated by IBM using an older 32nm process. As for what the chip could be used for: if it ever becomes mass-produced and stable, it could be a favorite among media processing, scientific, and encryption circles. Basically anything that requires processing tons of data in parallel at breakneck speed.
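That throughput figure is roughly what you would expect from the clock speed and core count alone, assuming each core retires about one instruction per cycle (an assumption on my part, not a figure from the researchers):

```python
# Back-of-envelope check of the headline number, assuming each core
# retires roughly one instruction per clock cycle (an assumption; the
# researchers' figure may be measured differently).
cores = 1_000
clock_hz = 1.78e9                      # 1.78 GHz per core
instructions_per_cycle = 1             # assumed

peak_ips = cores * clock_hz * instructions_per_cycle
print(f"{peak_ips:.2e} instructions per second")   # ~1.78e12, i.e. 1.78 trillion
```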

Can you imagine combining this technology with Google's Tensor Processing Units, or with heavily trained deep neural network systems?


Thursday, June 16, 2016

A Rocket Launch Brings China One Step Closer to Its Own GPS

As reported by Wired: On Sunday morning, the Chinese government launched the 23rd satellite in its BeiDou Navigation Satellite System—the Chinese equivalent of GPS—into orbit aboard a Long March-3C rocket. BeiDou has worked for a while on a regional level, but China has been racking up the launches recently. Each one is another step toward BeiDou having fully operational global coverage, something that only the United States and (kinda, sometimes, maybe) Russia have today. If it works, it could mean a new golden age of navigation. Unless it leads to global war.

BeiDou is already a Regional Navigational Satellite System; India and Japan are working on their own regional systems, too. Completing the Chinese constellation would turn it into a Global Navigational Satellite System, joining the US (the familiar GPS), Russia (GLONASS), and the European Union (Galileo). Though each places satellites in slightly different orbits and at different altitudes, they all work on the same idea, providing global coverage with enough signal to allow devices on Earth to compute a precise position fix. GPS is accurate to within a few meters under good conditions.
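Under the hood, every one of these constellations poses the same math problem: given known satellite positions and measured ranges, solve for the receiver's position. Here is a minimal least-squares sketch of that idea; it ignores the receiver clock bias that real GNSS solvers also estimate, and the satellite coordinates and receiver position below are illustrative numbers, not real ephemeris data.

```python
# Minimal sketch of the core GNSS idea: solve for a receiver position from
# known satellite positions and measured ranges. Real receivers also solve
# for a clock-bias term and use pseudoranges; this simplification and all
# numbers below are illustrative, not from any real constellation.
import numpy as np

sats = np.array([            # satellite positions, km (made up)
    [15600, 7540, 20140],
    [18760, 2750, 18610],
    [17610, 14630, 13480],
    [19170, 610, 18390],
])
true_pos = np.array([-41.77, -16.79, 6370.06])     # "unknown" receiver, km
ranges = np.linalg.norm(sats - true_pos, axis=1)   # measured distances

x = np.array([0.0, 0.0, 6370.0])                   # rough guess: somewhere on Earth's surface
for _ in range(10):                                # Gauss-Newton iterations
    diffs = x - sats
    pred = np.linalg.norm(diffs, axis=1)
    J = diffs / pred[:, None]                      # Jacobian of range w.r.t. position
    dx, *_ = np.linalg.lstsq(J, ranges - pred, rcond=None)
    x += dx

print(x)   # converges to roughly the true receiver position
```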

The more satellites you have, the more precise and accurate the system. And you need a reliable satellite constellation because so much modern technology is location-enabled and dependent. GPS and its siblings are how airplanes and freighters navigate, how maps stay accurate on the move, how cell phones work. The modern global economy only works if it knows where it is in all three spatial dimensions.

A more philosophical dimension matters even more. “If you want acceptance, a system has to have more than precision and accuracy, it has to have integrity,” says Brad Parkinson, a Stanford engineer and one of the inventors of GPS. “It has to operate within spec, and have some system of monitoring and publishing when it isn’t.” If a GPS satellite goes berserk, the FAA’s Wide Area Augmentation System sends out an alarm within six seconds. WAAS, or something like it, could just as easily monitor Galileo, GLONASS, and even BeiDou, and then, technically, anyone could use any and all of the various networks. “If it’s there, and it’s working, why not use it?” says Parkinson. “Almost all modern smartphone receivers support GPS and GLONASS already.”

Actually, the US and China have been working towards GPS-BeiDou interoperability for years in fields like aviation. “If you can land a plane in pea soup fog conditions, that’s a pretty great thing,” says Tom Langenstein, Executive Director of the Stanford Center for Position, Navigation and Time. “China would like to be able to do that too. It’s kind of a nice area of cooperation between our countries.”

Granted, BeiDou hasn’t been totally cooperative and transparent. China launched several satellites before telling the engineering community what their signal structure was—somewhat pointlessly, considering Stanford researchers were able to figure it out in about a day. But as Langenstein points out, if they fail to provide evidence of their accuracy and integrity, their satellites simply won’t be used. GLONASS has had a lot of trouble keeping its satellites in working order, and is notably cagey about system failures, which to Parkinson’s mind keeps it in a vicious cycle of limited viability. So it’s likely that, if BeiDou is to succeed, it will be by being welcomed into the international GNSS club.

Ultimately, international use of BeiDou satellites is in China’s own interest. “GPS has been a major boon for the US economy for the last twenty years,” Langenstein says. “China wants some of that. If you want to fear that, you can. But China is the second largest economy in the world and getting larger. It would be far better to cooperate and work with them than try to find some way to fight them.”

It’s that fighting part that could make BeiDou more scary than useful. “For the last several decades, satellites have been one of the signature elements of the US projecting as the sole remaining superpower. We can blow up anyone who looks at us cross-eyed,” says John Pike, a prominent military analyst and director of GlobalSecurity.org. “This suggests that China has global ambitions. They’ve got superpower-style space systems, but they don’t have the military to go with it.” The US and China are already frenemies at best; a significant military advantage for the Chinese could jeopardize the relationship further.

On the other hand, an optimist might point out the opportunities here. “BeiDou would change the asymmetry of military power,” Parkinson says. “But I’ve been saying for years that our ground soldiers should have sets that pick up US, Russian, Chinese, and European signals, and a very rapid technique of letting that ground soldier know when not to use it—a military analogue of WAAS. You wouldn’t be relying on foreign systems, but they’d enhance your mission when you know that they’re working properly.” BeiDou’s ultimate direction might not be clear yet, but it’s definitely headed there—fast.

Wednesday, June 15, 2016

The AI Dashcam App That Wants to Rate Every Driver in the World

As reported by IEEE Spectrum: If you’ve been out on the streets of Silicon Valley or New York City in the past nine months, there’s a good chance that your bad driving habits have already been profiled by Nexar. This U.S.-Israeli startup is aiming to build what it calls “an air traffic control system” for driving, and has just raised an extra $10.5 million in venture capital financing.
Since Nexar launched its dashcam app last year, smartphones running it have captured, analyzed, and recorded over 5 million miles of driving in San Francisco, New York, and Tel Aviv. The company’s algorithms have now automatically profiled the driving behavior of over 7 million cars, including more than 45 percent of all registered vehicles in the Bay Area, and over 30 percent of those in Manhattan.
Using the smartphone’s camera, machine vision, and AI algorithms, Nexar recognizes the license plates of the vehicles around it, and tracks their location, velocity, and trajectory. If a car speeds past or performs an illegal maneuver like running a red light, that information is added to a profile in Nexar’s online database. When another Nexar user’s phone later detects the same vehicle, it can flash up a warning to give it a wide berth. (This feature will go live later this year.)
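As a rough illustration of the kind of pipeline being described, the flow might look something like the sketch below. Every name, field, and threshold in it is invented for illustration; it is not Nexar's actual code or API.

```python
# Hypothetical sketch of the kind of pipeline the article describes:
# recognize a plate, log any flagged behavior, and check the plate's
# history before warning the driver. Every name and threshold here is
# invented for illustration; it is not Nexar's actual code or API.
from collections import defaultdict

profiles = defaultdict(list)          # plate -> list of recorded events

def record_event(plate, event, location, speed_kmh):
    profiles[plate].append({"event": event, "loc": location, "speed": speed_kmh})

def risk_score(plate):
    events = profiles[plate]
    flagged = sum(1 for e in events if e["event"] in {"ran_red_light", "hard_brake", "speeding"})
    return flagged / max(len(events), 1)

def maybe_warn(plate, threshold=0.3):
    if risk_score(plate) > threshold:
        print(f"Warning: give {plate} a wide berth")

record_event("7ABC123", "speeding", (37.77, -122.42), 95)
record_event("7ABC123", "ran_red_light", (37.78, -122.41), 60)
maybe_warn("7ABC123")
```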
Lior Strahilevitz, a law professor at the University of Chicago, proposed a similar (if lower-tech) reputation system for drivers a decade ago. “I think it’s a creative and sensible way to help improve the driving experience,” he says. “There aren’t a lot of legal impediments in the United States to what Nexar is doing, nor should there be.” Eran Shir, Nexar’s co-founder, says, “If you’re driving next to me and you’re a dangerous driver, I want to know about it so I can be prepared.”
Nexar estimates that if 1 percent of drivers use the app daily, it would take just one month to profile 99 percent of a city’s vehicles. “We think that it’s a service to the community to know if you’re a crazy driver or not,” says Shir.
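A quick back-of-envelope model (my own, not Nexar's methodology) shows why a small share of observers can cover a large fleet; the daily encounter rate is an assumed parameter.

```python
# Rough sanity check of the "1 percent covers 99 percent" claim, not
# Nexar's methodology. Assumes each encounter is independently with an
# app user; the encounter rate is a made-up parameter.
observer_share = 0.01      # fraction of drivers running the app daily
encounters_per_day = 200   # distinct vehicles a typical car passes per day (assumed)
days = 30

# Probability a given vehicle is seen at least once on a given day.
p_seen_daily = 1 - (1 - observer_share) ** encounters_per_day
p_profiled = 1 - (1 - p_seen_daily) ** days
print(f"daily: {p_seen_daily:.2%}, after {days} days: {p_profiled:.2%}")
# With these assumptions, coverage after a month comfortably exceeds 99 percent.
```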
That community includes insurance companies, which Nexar suggests could save billions by cherry-picking only the best drivers to cover. Nexar has calculated that companies using its universal driving score could save $125 a year on each policy. Drivers benefit, too, from video and sensor footage stored in the cloud that they can use to support their side of the story following a collision.

Shir hopes that Nexar will also reduce traffic fatalities long before self-driving cars become mainstream. The app can highlight treacherous intersections, or detect a car braking sharply and send alerts to users several cars back or even around a corner. “This needs to be a real-time network,” says Shir. “We’ve optimized the way that cars communicate so that the latency is very low: about 100 to 150 milliseconds.”
Such targeted warnings require much more precise geolocation than that offered by normal GPS systems, which are typically accurate to within only 5 to 50 meters. Nexar’s app fuses data from multiple sensors in the smartphone. The accelerometer senses potholes and speed bumps, while the magnetometer (used for compass settings) detects when the car is travelling under power lines. “We use these, refreshed fifty times a second, to crowdsource features of the road and pinpoint where you are to within 2 meters,” says Shir. A side benefit is that the company has built detailed maps of road surface quality in its pilot cities.
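In spirit, the correction step might look something like the sketch below, which snaps a noisy GPS fix to the known position of a crowdsourced road feature. The map, coordinates, and matching radius are all invented for illustration and are not Nexar's implementation.

```python
# Hedged sketch of the general idea only: correct a noisy GPS estimate by
# matching a locally detected road feature (e.g. a pothole felt by the
# accelerometer) against a crowdsourced map of known feature positions.
# The map, positions, and matching radius are invented for illustration.
import math

feature_map = {                       # feature id -> known (lat, lon) from the crowd
    "pothole_17": (37.77490, -122.41940),
    "powerline_3": (37.77530, -122.41810),
}

def distance_m(a, b):
    # small-area approximation: ~111 km per degree
    return math.hypot(a[0] - b[0], a[1] - b[1]) * 111_000

def snap(gps_fix, detected_feature, max_radius_m=50):
    known = feature_map.get(detected_feature)
    if known and distance_m(gps_fix, known) < max_radius_m:
        return known                  # trust the mapped feature position
    return gps_fix                    # otherwise keep the raw GPS fix

print(snap((37.77495, -122.41955), "pothole_17"))
```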
Shir thinks that Nexar can also help drivers realize the vision of smart, connected highways. “We’re going into a hybrid world where autonomous vehicles and humans will share the road,” says Shir. “We won’t be able to shout at each other or ask someone to move. We need a network that will manage our roads as a scarce resource.”
For the past decade, the automotive industry has been struggling to implement dedicated short range communications (DSRC), a messaging system that lets a car transmit its location, speed, and direction to nearby vehicles and infrastructure. Shir thinks that apps like Nexar could leapfrog the billions of dollars and decades of roll-out time that such a system would likely demand.
“DSRC is dead in the water,” he says. “Instead of sharing information about a single vehicle, where you need a density [of equipped vehicles] of 10 to 20 percent to become effective, you can share the information of all the vehicles around you, and start with 1 percent. It’s a massive force multiplier.”
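A rough model of my own, not Shir's math, illustrates the asymmetry: with DSRC an interaction is covered only if both vehicles are equipped, while with an observer model a vehicle is covered if any of the cars around it runs the app.

```python
# Back-of-envelope comparison behind the "force multiplier" claim; a rough
# model, not Shir's figures. With DSRC, an interaction is covered only if
# both vehicles are equipped. With an observer model, a vehicle is covered
# if any of the k cars around it runs the app (k is assumed).
p_dsrc, p_app, k = 0.15, 0.01, 20

dsrc_coverage = p_dsrc ** 2                  # both parties equipped
observer_coverage = 1 - (1 - p_app) ** k     # at least one nearby observer
print(f"DSRC at {p_dsrc:.0%} penetration: {dsrc_coverage:.1%} of interactions")
print(f"App at {p_app:.0%} penetration:  {observer_coverage:.1%} of vehicles observed")
```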
Over the next year, Nexar plans to launch its network features in 10 more cities, including San Diego; Washington, D.C.; Chicago; and Seattle. It will work towards that magic 1-percent penetration mark where it could rate almost every driver and detect almost every incident.
Although ranking the driving performance of every vehicle in the United States might sound legally dubious, Lior Strahilevitz says that it is probably legal: “Courts generally say that people generally have little or no expectation of privacy in the movements of their cars on public roads, as long as cars aren’t being tracked everywhere they go for a lengthy period of time.”
Nevertheless, Nexar will face some ethical dilemmas. For example, should the app inform users when it spots a license plate that’s the subject of an Amber Alert? Or contact law enforcement directly if the algorithms suggest that an erratically moving car is being operated by an intoxicated driver?
Although Shir says that Nexar is “not interested in generating more traffic ticket revenue for cities… or becoming the long arm of the FBI,” he admits that law enforcement could subpoena its raw footage and sensor data.
Ultimately, Nexar might succeed because drivers are constantly being rated, whether or not they are running the app themselves. If its algorithms are judging you anyway, you might not want to be the only one in the dark about that accident-prone pick-up in the next lane.

Apple Is Bringing the AI Revolution to Your iPhone

As reported by Wired: Your next iPhone will be even better at guessing what you want to type before you type it. Or so say the technologists at Apple.

Let’s say you use the word “play” in a text message. In the latest version of the iOS mobile operating system, “we can tell the difference between the Orioles who are playing in the playoffs and the children who are playing in the park, automatically,” Apple senior vice president Craig Federighi said Monday morning during his keynote at the company’s annual Worldwide Developer Conference.

Like a lot of big tech companies, Apple is deploying deep neural networks, networks of hardware and software that can learn by analyzing vast amounts of data. Specifically, Apple uses “long short-term memory” neural networks, or LSTMs. They can “remember” the beginning of a conversation as they’re reading the end of it, making them better at grasping context.
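For a sense of what such a model looks like in code, here is a minimal PyTorch sketch of an LSTM next-word predictor. It is not Apple's QuickType model, and the vocabulary size and dimensions are placeholders.

```python
# Minimal PyTorch sketch of an LSTM-based next-word predictor of the kind
# described above; it is not Apple's QuickType model, and the vocabulary
# and sizes are placeholders.
import torch
import torch.nn as nn

class NextWordLSTM(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        x = self.embed(token_ids)          # (batch, seq, embed_dim)
        h, _ = self.lstm(x)                # hidden state carries earlier context
        return self.out(h[:, -1, :])       # logits for the next word

model = NextWordLSTM()
context = torch.randint(0, 10_000, (1, 12))   # a 12-token context window
logits = model(context)
print(logits.topk(3).indices)                 # the three most likely next tokens
```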

Google uses a similar method to drive Smart Reply, which suggests responses to email messages. But Apple’s “QuickType”—that’s what the company calls its version—shows that not only is Apple pushing AI onto personal devices, it’s pushing harder than Federighi let on.

Today, on its website, Apple also introduced an application programming interface, or API, that lets outside businesses and coders use a similar breed of neural network. This tool, Basic Neural Network Subroutines, is a “collection of functions that you can use to construct neural networks” on a wide range of Apple operating systems, including iOS as well as OS X (for desktops and laptops), tvOS (for TVs), and watchOS (for watches), according to the documentation. “They’re making it as easy as possible for people to add neural nets to their apps,” says Chris Nicholson, CEO and founder of deep learning startup Skymind.

For now, BNNS looks better at identifying images than understanding natural language. But either way, neural networks don’t typically run on laptops and phones. They run atop computer servers on the other side of the Internet, and then they deliver their results to devices across the wire. (Google just revealed that it has built a specialized chip that executes neural nets inside its data centers before sending the results to your phone). Apple wants coders to build neural nets that work even without a connection back to the ‘net—and that’s unusual. Both Google and IBM have experimented with the idea, but Apple is doing it now.
It might not work. Apple doesn’t provide a way of training the neural net, where it actually learns a task by analyzing data. The new Apple API is just a way of executing the neural net once it’s trained. Coders, Nicholson says, will have to handle that training on their own or use pre-trained models from some other source. Plus, no one yet knows how well Apple’s neural nets will run on a tiny device like a phone or a watch. They may need more processing power and battery life than such devices can provide. But those are details; one day, neural nets will work on personal devices, and Apple is moving toward that day.
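The training/execution split is easy to see in miniature: if the weights are produced elsewhere and shipped with the app, on-device inference is just a forward pass. Here is a plain NumPy stand-in for that pattern (not the actual BNNS API):

```python
# Illustration of the "execute only" pattern the article describes: the
# weights are assumed to have been trained elsewhere and shipped with the
# app; the device only runs the forward pass. Plain NumPy stand-in, not
# the actual BNNS API.
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def forward(x, layers):
    """Run a pre-trained fully connected network on one input vector."""
    for weights, bias in layers:
        x = relu(weights @ x + bias)
    return x

rng = np.random.default_rng(0)
pretrained = [                                  # stand-in for shipped weights
    (rng.standard_normal((16, 8)), rng.standard_normal(16)),
    (rng.standard_normal((4, 16)), rng.standard_normal(4)),
]
print(forward(rng.standard_normal(8), pretrained))
```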

Bummer: SpaceX’s Landing Streak Comes to an End

As reported by The Verge: A SpaceX Falcon 9 rocket successfully launched two satellites into orbit this morning, but the company failed to land the vehicle on a floating drone ship at sea afterward.
The vehicle's landing caused a bit of drama, since SpaceX wasn't sure at first if the vehicle actually made it down in one piece. Once the rocket landed, it shook the drone ship pretty violently, causing the ship's onboard camera to freeze. The last shots of the vehicle before the camera cut out showed the Falcon 9 standing upright on the ship, but there were also some flames around the bottom.
Afterward, a SpaceX employee announced on the company's webcast that the vehicle was indeed lost. "We can say that Falcon 9 was lost in this attempt," said Kate Tice, a process improvement engineer for SpaceX. Later, CEO Elon Musk confirmed that the Falcon 9 suffered an RUD, or a rapid unscheduled disassembly. That's Musk-speak for an explosion.

Ascent phase & satellites look good, but booster rocket had a RUD on droneship

Later, Musk said that the problem had to do with low thrust in one of the three engines used for the landing burn, and that all of them need to be operating at full capacity to handle this type of landing. He noted that the company is already working on upgrades to the Falcon 9 so that it can handle this type of "thrust shortfall" in the future.
The failure puts an end to SpaceX’s recent landing streak. The company has pulled off successful landings after its past three launches, all of which touched down on the drone ship. So far the company has landed four Falcon 9s in total — three at sea and one on solid ground.
SpaceX will have many more chances to land its rockets again soon. The company will launch a cargo resupply mission to the International Space Station for NASA on July 16th. After that launch, SpaceX will try to land the Falcon 9 on solid ground at Cape Canaveral, Florida — something it hasn’t attempted since its first rocket landing in December. And after that, SpaceX has another satellite launch slated for August.
Meanwhile, the company still has an impressive stockpile of landed rockets in its possession. Right now, SpaceX is keeping its four recovered rockets in a hangar at Launch Complex 39A, a launch site at Kennedy Space Center in Florida that the company leases from NASA. That hangar can only store five Falcon 9 rockets at a time, though. So whenever SpaceX does land its next rocket in Florida, the building will be at full capacity.

Tuesday, June 14, 2016

Elon Musk: People Will Probably Die on the First SpaceX Missions to Mars

As reported by IBTimes: Technology entrepreneur Elon Musk is really excited about getting the first humans to land on Mars in 2025 with a view to establishing a colony, but in case you didn't realize this already, he is warning that pioneering a new planet probably won't be much fun.

"It's dangerous and probably people will die – and they'll know that. And then they'll pave the way, and ultimately it will be very safe to go to Mars, and it will be very comfortable. But that will be many years in the future," Musk told the Washington Post in a new interview detailing how the Mission to Mars technical journey is likely to evolve.

Musk's space transportation company SpaceX currently has a $1.6bn contract with NASA to routinely ferry cargo to and from the International Space Station (ISS). In November 2015, SpaceX received official approval from NASA to begin sending the agency's astronauts to the ISS from 2017; at present, the only way for US astronauts to reach the station is via Russia.

SpaceX plans to start flying unmanned spacecraft to Mars in 2018, with flights timed to the launch windows that occur roughly every two years, when Earth and the Red Planet are closest. The purpose of these missions will be to gather valuable data about descending and landing on Mars for future human missions.

There is currently a great deal of interest in the Mission to Mars, and organisations like Dutch-based Mars One have galvanized the general public to apply to be the first humans on Mars. The likelihood of this being possible without backing from NASA and the European Space Agency (ESA), however, is really slim, and some think that Mars One could just be a big scam.

"Essentially what we're saying is we're establishing a cargo route to Mars. It's a regular cargo route. You can count on it. It's going to happen every 26 months. Like a train leaving the station," he said.

"And if scientists around the world know that they can count on that, and it's going to be inexpensive, relatively speaking compared to anything in the past, then they will plan accordingly and come up with a lot of great experiments."

If these autonomous spacecraft flights are successful and are proven to be safe enough for humans, then the first human mission will take place in 2025. However, the two planets are separated by an average distance of about 140 million miles, and even on a launch timed to their closest approach, it will take months for the spacecraft to reach Mars.

For the first pioneering humans who decide to leave their lives on Earth behind, Musk admits the journey will likely be "hard, risky, dangerous, difficult," but he points out it is little different from the crossings made by the British who colonized the Americas in the 1600s.

"Just as with the establishment of the English colonies, there are people who love that. They want to be the pioneers," he said.

Friday, June 10, 2016

V2X - Qualcomm’s Connected Car Reference Platform Aims to Connect Smart Cars to Everything

As reported by NetworkWorld: With 200 to 300 microcontrollers and microprocessors in the typical automobile, cars are already pretty smart. And Google's and Tesla's continued development, as well as auto manufacturers' R&D investments in preparation for autonomous cars, indicate cars are about to get much smarter.

That increased intelligence means vehicles will have more silicon devices that are more integrated, with more densely packed circuitry. Functional modules, such as control systems, infotainment, and autonomous steering and braking, multiply the number of chips that semiconductor manufacturers can sell into each car.

To fill the gap between the connectivity capabilities of today's cars and the complex connectivity of next-generation cars, Qualcomm today announced its Connected Car Reference Platform, which the car industry can use to build prototypes of the next-generation connected car. Every category, from economy to luxury, will be much smarter than the connected luxury car of today, creating a big opportunity for Qualcomm to supply semiconductors to automakers and suppliers.

Connected cars require faster, more-complex connectivity

Connectivity becomes more complex as infotainment experiences become richer and cars become semi-autonomous, like the Tesla Model S, or fully autonomous, like Google's vehicle. Frank Fitzek, chief of Germany's 5G Lab, explained to me in February how autonomous cars will need ultra-low-latency, fast 5G network connectivity.

Connected car network speeds will have to get faster because consumer expectations for connectivity in the autonomous era will be the same in a car as at home. Passengers will connect mobile devices with one another and infotainment systems to collaboratively work, play games, cast streamed music and video to car stereos and displays, as well as communicate with the world beyond the car interior.

If this sounds futuristic, go rent or borrow a 2016 model luxury car from Audi, Honda, or Mercedes or a Tesla S and you will experience excellent connectivity and smartphone integration. Connectivity and options in the next generation will be substantially better.

Autonomous steering and collision avoidance features were not announced. Onboard specialized processors, in addition to the capabilities announced today, will be necessary for autonomous driving. It’s not difficult to imagine that Qualcomm will apply its machine learning SDK, announced just a few weeks ago, and the Snapdragon 820 processor to meet those needs.

Collision avoidance, though, requires a lot of communication with onboard car sensors and cameras—and with a local cloud of Wi-Fi and V2X. V2X, sometimes referred to as vehicle-to-everything, incorporates V2I (Vehicle to Infrastructure), V2V (Vehicle to Vehicle), V2P (Vehicle to Pedestrian), V2D (Vehicle to Device) and V2G (Vehicle to Grid). Much of the collision-avoidance system will operate using a local cloud, but safely coordinating cars in heavy traffic travelling at 70 mph, or on the Autobahn at 120 mph, will require ultra-low-latency, fast 5G.

Features of the Connected Car Reference Platform

Qualcomm described the following features of the Connected Car Reference Platform in its release:
  • Scalability: Using a common framework that scales from a basic telematics control unit (TCU) up to a highly integrated wireless gateway, connecting multiple electronic control units (ECUs) within the car and supporting critical functions, such as over-the-air software upgrades and data collection and analytics.
  • Future-proofing: Allowing the vehicle’s connectivity hardware and software to be upgraded through its life cycle, providing automakers with a migration path from Dedicated Short Range Communications (DSRC) to hybrid/cellular V2X and from 4G LTE to 5G.
  • Wireless coexistence: Managing concurrent operation of multiple wireless technologies using the same spectrum frequencies, such as Wi-Fi, Bluetooth and Bluetooth Low Energy.
  • OEM and third-party applications support: Providing a secure framework for the development and execution of custom applications.
There are a few interesting points about those features. Qualcomm is attempting to solve a difficult problem for automakers: over-the-air software updates. Updating software on a mission-critical system such as an autonomous car is a much harder problem than updating a smartphone because it has to be completely secure and work every time without reducing safety. But Qualcomm has to solve this problem anyway to accelerate shipments not only to the car market but to the IoT market, where it hopes to sell tens of billions of chips.

Keeping up with connectivity improvements

One of the inconsistencies between building cars and building smartphones is that the average car has a 12-year useful life, while a smartphone lasts just a couple of years. Smartphone connectivity improves with each design iteration, posing the problem that phone network speeds will almost always be faster than what is installed in the car. Unless the car network is future-proofed, consumers will rely on their phone's network rather than the car's. Qualcomm said there will be a migration from older networks to newer ones, perhaps offering an upgrade to car network connectivity every two years to match the improvements in smartphones.

Qualcomm is moving toward a unified communications system to address infotainment, navigation, autonomous steering and braking, and control systems connected to the controller area network (CAN). Autonomous steering and braking, navigation, and control systems must be connected, but automakers have resisted combining the CAN bus with infotainment systems because it increases the attack surface that could be exploited by a criminal hacker. Qualcomm claims its design is secure, but it can expect to be asked by safety engineers to prove it.

Qualcomm says it expects to ship the Connected Car Reference Platform to automakers, tier 1 auto suppliers and developers late this year.