
Thursday, November 14, 2013

NASA: PhoneSat 2.4 is Ready for Launch

As reported by NASA: For the second time this year, NASA is preparing to send a smartphone-controlled small spacecraft into orbit. The PhoneSat 2.4 mission demonstrates innovative approaches for the small spacecraft technologies of the future.
The NASA PhoneSat 2.4 is hitching a ride aboard an Orbital Minotaur I rocket slated for a November 19 liftoff from the Mid-Atlantic Regional Spaceport at NASA's Wallops Flight Facility in Virginia. The primary payload on the booster is the U.S. Air Force Office of Responsive Space ORS-3 mission, which will validate launch and range improvements for NASA and the military.
PhoneSat 2.4 builds upon the successful flights of a trio of NASA smartphone satellites that were orbited together last April. That pioneering mission gauged the use of consumer-grade smartphone technology as the main control electronics of a capable, yet very low-cost, satellite, reports Andrew Petro, program executive for small spacecraft technology at NASA Headquarters in Washington.
Each smartphone is housed in a standard cubesat structure, a cube measuring roughly four inches on each side.
The soon-to-be lofted PhoneSat 2.4 has two-way radio communications capability, along with reaction wheels to provide attitude control, Petro says, and will be placed into a much higher orbit than its PhoneSat predecessors. Those were short-lived, operating for about a week in orbit.
Tabletop technology
“We’re taking PhoneSat to another step in terms of capability, along with seeing if the satellite continues to function for an extended period of time,” Petro explains.
The PhoneSat mission is a technology demonstration project developed through the agency’s Small Spacecraft Technology Program, part of NASA’s Space Technology Mission Directorate.
NASA PhoneSats take advantage of “off-the-shelf” consumer devices that already pack many of the systems needed for a spacecraft into an ultra-small package, such as fast processors, multipurpose operating systems, sensors, GPS receivers, and high-resolution cameras.
“It’s tabletop technology,” Petro says. “The size of a PhoneSat makes a big difference. You don’t need a building, just a room. Everything you need to do becomes easier and more portable. The scale of things just makes everything, in many ways, easier. It really unleashes a lot of opportunity for innovation,” he says.
Consumer electronics market
There’s another interesting aspect to using the smartphone as a basic electronic package for PhoneSats.
“The technology of the consumer electronics market is going to continue to advance,” Petro notes. “NASA can pick up on those advances that are driven by the needs of the consumer.”
What’s the big deal about small satellites?
NASA is eyeing use of small, low-cost, powerful satellites for atmospheric or Earth science, communications, or other space-borne applications.
For example, work is already underway on the Edison Demonstration of Smallsat Networks (EDSN) mission, says Petro. The EDSN effort consists of a loose formation of eight identical cubesats in orbit, each able to cross-link with the others to perform space weather monitoring duties.
Magic dust
The three PhoneSats that were orbited earlier this year signaled “the first baby step,” says Bruce Yost, the program manager for NASA’s Small Spacecraft Technology Program at the Ames Research Center in Moffett Field, Calif.
“The PhoneSat 2.4 will be at a higher altitude and stay in space for a couple of years before reentering,” Yost adds. “So we’ll be able to start collecting data on the radiation effects on the satellite and see if we run into anything that causes problems.”
Yost says where the real “magic dust” of PhoneSats comes into play is how you program them. “That is, what applications can you run on them to make them useful. We’re adding more and more complexity into the PhoneSats.”
To that end, PhoneSats and the applications they are imbued with can lead to new ways to interact with and explore space, Yost observes. “You can approach problems in a more distributed fashion. So it’s an architectural shift, the concept of inexpensive but lots of small probes.”
NASA’s Petro sees another value in pushing forward on small satellites.
“It used to be that kids growing up wanted to be an astronaut. I think we might be seeing kids saying, what they want to do is build a spacecraft. The idea here is that they really can do that,” Petro says. “They can get together with a few other people to build and fly a spacecraft. Some students coming out of college as new hires have already built and flown a satellite…that’s a whole new notion, one that was not possible even 10 years ago,” he concludes.

Wednesday, November 13, 2013

Smartphones in 2017 will be Smart Enough to Automate and Manage Your Daily Life

Starting from certain basic tasks, smartphones are expected to control greater aspects of our lives.
As reported by IB Times: Over the next four years, smartphones will become so intelligent that they will be able to predict their owners' next move, their next purchase or even interpret their actions, according to a new Gartner report, which says that smartphones will be able to perform such tasks using what has been described as “the next step in personal cloud computing.”
“Smartphones are becoming smarter, and will be smarter than you by 2017,” Carolina Milanesi, research vice president at Gartner, said in a statement on Tuesday at Gartner Symposium/ITxpo 2013, which is taking place in Barcelona, Spain, from Nov. 10-14. “If there is heavy traffic, it will wake you up early for a meeting with your boss, or simply send an apology if it is a meeting with your colleague. The smartphone will gather contextual information from its calendar, its sensors, the user’s location and personal data.”
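Milanesi's example amounts to a simple contextual rule. Purely as an illustration of the kind of logic she describes, here is a toy Python sketch; the function, thresholds, and inputs are hypothetical and not part of any Gartner or vendor system:

```python
from datetime import datetime, timedelta

def plan_morning(meeting_time, attendee, traffic_delay_min, normal_alarm):
    """Toy 'cognizant computing' rule: react to heavy traffic before a meeting."""
    if traffic_delay_min <= 15:          # hypothetical threshold for "heavy" traffic
        return f"keep alarm at {normal_alarm:%H:%M}"
    if attendee == "boss":
        # Wake the user early enough to absorb the expected delay.
        new_alarm = normal_alarm - timedelta(minutes=traffic_delay_min)
        return f"move alarm to {new_alarm:%H:%M} for the {meeting_time:%H:%M} meeting"
    # For a colleague, send an apology instead of disturbing the user's sleep.
    return f"send apology: running about {traffic_delay_min} minutes late"

print(plan_morning(datetime(2013, 11, 13, 9, 0), "boss", 30,
                   datetime(2013, 11, 13, 7, 0)))
```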
According to Milanesi, the transition from mobile phones to smartphones is attributed to two main factors -- technology and apps. While the former is responsible for features such as cameras, location-based intelligence and sensors, the latter has connected such features to an array of sophisticated functions that have improved users’ daily lives.
Initially, smartphones are expected to perform basic tasks, such as booking a car for its yearly service, creating a weekly to-do list, sending birthday greetings or responding to everyday emails. However, as consumers become more confident in their smartphone's ability to perform certain menial tasks, they are expected to begin allowing more apps and services to take control of other, more crucial, aspects of their lives, a shift that analysts say will mark “the era of cognizant computing.”
However, it may be a while before smartphones are ready to take over the planet from humans.
Milanesi said that smartphones will be smarter than what they are now not because of inherent intelligence, but because the data stored in the cloud will provide them with the computational ability to make sense of the information they have. Smartphones will have the potential to become consumers’ “secret digital agent,” but only if users are willing to provide them with the data.
According to Gartner, regulatory and privacy issues, and the level of comfort users will have in sharing their personal information, will differ considerably based on age groups and geographies. Here’s a figure showing the four stages of cognizant computing:
The four stages of cognizant computing.
According to Gartner, cognizant computing will have a significant impact on hardware vendors and on other services and business models, and over the next two to five years it will become one of the strongest market forces affecting the entire technology ecosystem.
“Over the next five years, the data that is available about us, our likes and dislikes, our environment and relationships will be used by our devices to grow their relevance and ultimately improve our life,” Milanesi said.

All Can Be Lost: The Risk of Putting ALL of Our Knowledge in the Hands of Machines

We rely on computers to fly our planes, find our cancers, design our buildings, audit our businesses. That's all well and good. But what happens when the computer fails?
As reported by The Atlantic: On the evening of February 12, 2009, a Continental Connection commuter flight made its way through blustery weather between Newark, New Jersey, and Buffalo, New York. As is typical of commercial flights today, the pilots didn’t have all that much to do during the hour-long trip. The captain, Marvin Renslow, manned the controls briefly during takeoff, guiding the Bombardier Q400 turboprop into the air, then switched on the autopilot and let the software do the flying. He and his co-pilot, Rebecca Shaw, chatted—about their families, their careers, the personalities of air-traffic controllers—as the plane cruised uneventfully along its northwesterly route at 16,000 feet. The Q400 was well into its approach to the Buffalo airport, its landing gear down, its wing flaps out, when the pilot’s control yoke began to shudder noisily, a signal that the plane was losing lift and risked going into an aerodynamic stall. The autopilot disconnected, and the captain took over the controls. He reacted quickly, but he did precisely the wrong thing: he jerked back on the yoke, lifting the plane’s nose and reducing its airspeed, instead of pushing the yoke forward to gain velocity. Rather than preventing a stall, Renslow’s action caused one. The plane spun out of control, then plummeted. “We’re down,” the captain said, just before the Q400 slammed into a house in a Buffalo suburb.

The crash, which killed all 49 people on board as well as one person on the ground, should never have happened. A National Transportation Safety Board investigation concluded that the cause of the accident was pilot error. The captain’s response to the stall warning, the investigators reported, “should have been automatic, but his improper flight control inputs were inconsistent with his training” and instead revealed “startle and confusion.” An executive from the company that operated the flight, the regional carrier Colgan Air, admitted that the pilots seemed to lack “situational awareness” as the emergency unfolded.

The Buffalo crash was not an isolated incident. An eerily similar disaster, with far more casualties, occurred a few months later. On the night of May 31, an Air France Airbus A330 took off from Rio de Janeiro, bound for Paris. The jumbo jet ran into a storm over the Atlantic about three hours after takeoff. Its air-speed sensors, coated with ice, began giving faulty readings, causing the autopilot to disengage. Bewildered, the pilot flying the plane, Pierre-Cédric Bonin, yanked back on the stick. The plane rose and a stall warning sounded, but he continued to pull back heedlessly. As the plane climbed sharply, it lost velocity. The airspeed sensors began working again, providing the crew with accurate numbers. Yet Bonin continued to slow the plane. The jet stalled and began to fall. If he had simply let go of the control, the A330 would likely have righted itself. But he didn’t. The plane dropped 35,000 feet in three minutes before hitting the ocean. All 228 passengers and crew members died.

Pilots today work inside what they call “glass cockpits.” The old analog dials and gauges are mostly gone. They’ve been replaced by banks of digital displays. Automation has become so sophisticated that on a typical passenger flight, a human pilot holds the controls for a grand total of just three minutes. What pilots spend a lot of time doing is monitoring screens and keying in data. They’ve become, it’s not much of an exaggeration to say, computer operators.

And that, many aviation and automation experts have concluded, is a problem. Overuse of automation erodes pilots’ expertise and dulls their reflexes, leading to what Jan Noyes, an ergonomics expert at Britain’s University of Bristol, terms “a de-skilling of the crew.” No one doubts that autopilot has contributed to improvements in flight safety over the years. It reduces pilot fatigue and provides advance warnings of problems, and it can keep a plane airborne should the crew become disabled. But the steady overall decline in plane crashes masks the recent arrival of “a spectacularly new type of accident,” says Raja Parasuraman, a psychology professor at George Mason University and a leading authority on automation. When an autopilot system fails, too many pilots, thrust abruptly into what has become a rare role, make mistakes. Rory Kay, a veteran United captain who has served as the top safety official of the Air Line Pilots Association, put the problem bluntly in a 2011 interview with the Associated Press: “We’re forgetting how to fly.” The Federal Aviation Administration has become so concerned that in January it issued a “safety alert” to airlines, urging them to get their pilots to do more manual flying. An overreliance on automation, the agency warned, could put planes and passengers at risk.

Doctors use computers to make diagnoses and to perform surgery. Wall Street bankers use them to assemble and trade financial instruments. Architects use them to design buildings. Attorneys use them in document discovery. And it’s not only professional work that’s being computerized. Thanks to smartphones and other small, affordable computers, we depend on software to carry out many of our everyday routines. We launch apps to aid us in shopping, cooking, socializing, even raising our kids. We follow turn-by-turn GPS instructions. We seek advice from recommendation engines on what to watch, read, and listen to. We call on Google, or Siri, to answer our questions and solve our problems. More and more, at work and at leisure, we’re living our lives inside glass cockpits.

Psychologists have found that when we work with computers, we often fall victim to two cognitive ailments—complacency and bias—that can undercut our performance and lead to mistakes. Automation complacency occurs when a computer lulls us into a false sense of security. Confident that the machine will work flawlessly and handle any problem that crops up, we allow our attention to drift. We become disengaged from our work, and our awareness of what’s going on around us fades. Automation bias occurs when we place too much faith in the accuracy of the information coming through our monitors. Our trust in the software becomes so strong that we ignore or discount other information sources, including our own eyes and ears. When a computer provides incorrect or insufficient data, we remain oblivious to the error.

What’s most astonishing, and unsettling, about computer automation is that it’s still in its early stages. Experts used to assume that there were limits to the ability of programmers to automate complicated tasks, particularly those involving sensory perception, pattern recognition, and conceptual knowledge. They pointed to the example of driving a car, which requires not only the instantaneous interpretation of a welter of visual signals but also the ability to adapt seamlessly to unanticipated situations. “Executing a left turn across oncoming traffic,” two prominent economists wrote in 2004, “involves so many factors that it is hard to imagine the set of rules that can replicate a driver’s behavior.” Just six years later, in October 2010, Google announced that it had built a fleet of seven “self-driving cars,” which had already logged more than 140,000 miles on roads in California and Nevada.

Driverless cars provide a preview of how robots will be able to navigate and perform work in the physical world, taking over activities requiring environmental awareness, coordinated motion, and fluid decision making. Equally rapid progress is being made in automating cerebral tasks. Just a few years ago, the idea of a computer competing on a game show like Jeopardy would have seemed laughable, but in a celebrated match in 2011, the IBM supercomputer Watson trounced Jeopardy’s all-time champion, Ken Jennings. Watson doesn’t think the way people think; it has no understanding of what it’s doing or saying. Its advantage lies in the extraordinary speed of modern computer processors.

In Race Against the Machine, a 2011 e-book on the economic implications of computerization, the MIT researchers Erik Brynjolfsson and Andrew McAfee argue that Google’s driverless car and IBM’s Watson are examples of a new wave of automation that, drawing on the “exponential growth” in computer power, will change the nature of work in virtually every job and profession. Today, they write, “computers improve so quickly that their capabilities pass from the realm of science fiction into the everyday world not over the course of a human lifetime, or even within the span of a professional’s career, but instead in just a few years.”

In a classic 1983 article in the journal Automatica, Lisanne Bainbridge, an engineering psychologist at University College London, described a conundrum of computer automation. Because many system designers assume that human operators are “unreliable and inefficient,” at least when compared with a computer, they strive to give the operators as small a role as possible. People end up functioning as mere monitors, passive watchers of screens. That’s a job that humans, with our notoriously wandering minds, are especially bad at. Research on vigilance, dating back to studies of radar operators during World War II, shows that people have trouble maintaining their attention on a stable display of information for more than half an hour. “This means,” Bainbridge observed, “that it is humanly impossible to carry out the basic function of monitoring for unlikely abnormalities.” And because a person’s skills “deteriorate when they are not used,” even an experienced operator will eventually begin to act like an inexperienced one if restricted to just watching. The lack of awareness and the degradation of know-how raise the odds that when something goes wrong, the operator will react ineptly. The assumption that the human will be the weakest link in the system becomes self-fulfilling.

Psychologists have discovered some simple ways to temper automation’s ill effects. You can program software to shift control back to human operators at frequent but irregular intervals; knowing that they may need to take command at any moment keeps people engaged, promoting situational awareness and learning. You can put limits on the scope of automation, making sure that people working with computers perform challenging tasks rather than merely observing. Giving people more to do helps sustain the generation effect, our tendency to retain information better when we produce it actively rather than receive it passively. You can incorporate educational routines into software, requiring users to repeat difficult manual and mental tasks that encourage memory formation and skill building.
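The first of those ideas, handing control back at frequent but irregular intervals, is easy to picture in code. The sketch below is a hypothetical illustration, not any avionics or automation vendor's implementation; it simply picks unpredictable hand-back times within a work segment:

```python
import random

def schedule_handbacks(segment_minutes, mean_gap=25, jitter=10):
    """Return irregular times (in minutes) at which the automation hands control
    back to the human operator, so the intervals stay unpredictable."""
    times, t = [], 0
    while True:
        t += mean_gap + random.randint(-jitter, jitter)  # frequent but irregular
        if t >= segment_minutes:
            break
        times.append(t)
    return times

print(schedule_handbacks(180))  # e.g. [21, 52, 74, 108, 131, 166]
```

Because the operator cannot predict the next hand-back, attention has to stay on the task rather than drifting into pure monitoring.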

Some software writers take such suggestions to heart. In schools, the best instructional programs help students master a subject by encouraging attentiveness, demanding hard work, and reinforcing learned skills through repetition. Their design reflects the latest discoveries about how our brains store memories and weave them into conceptual knowledge and practical know-how. But most software applications don’t foster learning and engagement. In fact, they have the opposite effect. That’s because taking the steps necessary to promote the development and maintenance of expertise almost always entails a sacrifice of speed and productivity. Learning requires inefficiency. Businesses, which seek to maximize productivity and profit, would rarely accept such a trade-off. Individuals, too, almost always seek efficiency and convenience. We pick the program that lightens our load, not the one that makes us work harder and longer. Abstract concerns about the fate of human talent can’t compete with the allure of saving time and money.

The small island of Igloolik, off the coast of the Melville Peninsula in the Nunavut territory of northern Canada, is a bewildering place in the winter. The average temperature hovers at about 20 degrees below zero, thick sheets of sea ice cover the surrounding waters, and the sun is rarely seen. Despite the brutal conditions, Inuit hunters have for some 4,000 years ventured out from their homes on the island and traveled across miles of ice and tundra to search for game. The hunters’ ability to navigate vast stretches of the barren Arctic terrain, where landmarks are few, snow formations are in constant flux, and trails disappear overnight, has amazed explorers and scientists for centuries. The Inuit’s extraordinary way-finding skills are born not of technological prowess—they long eschewed maps and compasses—but of a profound understanding of winds, snowdrift patterns, animal behavior, stars, and tides.

Inuit culture is changing now. The Igloolik hunters have begun to rely on computer-generated maps to get around. Adoption of GPS technology has been particularly strong among younger Inuit, and it’s not hard to understand why. The ease and convenience of automated navigation makes the traditional Inuit techniques seem archaic and cumbersome.

But as GPS devices have proliferated on Igloolik, reports of serious accidents during hunts have spread. A hunter who hasn't developed way-finding skills can easily become lost, particularly if his GPS receiver fails. The routes so meticulously plotted on satellite maps can also give hunters tunnel vision, leading them onto thin ice or into other hazards a skilled navigator would avoid. The anthropologist Claudio Aporta, of Carleton University in Ottawa, has been studying Inuit hunters for more than 15 years. He notes that while satellite navigation offers practical advantages, its adoption has already brought a deterioration in way-finding abilities and, more generally, a weakened feel for the land. An Inuit on a GPS-equipped snowmobile is not so different from a suburban commuter in a GPS-equipped SUV: as he devotes his attention to the instructions coming from the computer, he loses sight of his surroundings. He travels “blindfolded,” as Aporta puts it. A unique talent that has distinguished a people for centuries may evaporate in a generation.

Whether it’s a pilot on a flight deck, a doctor in an examination room, or an Inuit hunter on an ice floe, knowing demands doing. One of the most remarkable things about us is also one of the easiest to overlook: each time we collide with the real, we deepen our understanding of the world and become more fully a part of it. While we’re wrestling with a difficult task, we may be motivated by an anticipation of the ends of our labor, but it’s the work itself—the means—that makes us who we are. Computer automation severs the ends from the means. It makes getting what we want easier, but it distances us from the work of knowing. As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want? If we don’t grapple with that question ourselves, our gadgets will be happy to answer it for us.



Tuesday, November 12, 2013

How Online Mapmakers Are Helping the Red Cross Save Lives in the Philippines

Hundreds of destroyed homes are visible in this aerial photograph from the Samar province of the Philippines.
As reported by the Atlantic: It will be months before we know the true damage brought about by super typhoon Haiyan. The largest death tolls now associated with the storm are only estimates. Aid workers from across the world are now flying to the island nation or have just arrived there. They—and Filipinos—will support survivors and start to rebuild.

But they will be helped by an incredible piece of technology, a worldwide, crowd-sourced humanitarian collaboration made possible by the Internet.

What is it? It’s a highly detailed map of the areas affected by super typhoon Haiyan, and it mostly didn't exist three days ago, when the storm made landfall.

Since Saturday, more than 400 volunteers have made nearly three quarters of a million additions to a free, online map of areas in and around the Philippines. Those additions reflect the land before the storm, but they will help Red Cross workers and volunteers make critical decisions after it about where to send food, water, and supplies.

These things are easy to hyperbolize, but in the Philippines, now, it is highly likely that free mapping data and software—and the community that supports them—will save lives.

The Wikipedia of maps
 

The changes were made to OpenStreetMap (OSM), a sort of Wikipedia of maps. OSM aims to be a complete map of the world, free to use and editable by all. Created in 2004, it now has over a million users.

I spoke to Dale Kunce, senior geospatial engineer at the American Red Cross, about how volunteer mapping helps improve the situation in the Philippines.

The Red Cross, internationally, recently began to use open source software and data in all of its projects, he said. Free software reduces or eliminates project “leave behind” costs, or the amount of money required to keep something running after the Red Cross leaves. Any software or data compiled by the Red Cross are now released under an open-source or share-alike license.

While OpenStreetMap has been used in humanitarian crises before, super typhoon Haiyan marks the first time the Red Cross has coordinated its use and the volunteer effort around it.

How the changes were made


The 410 volunteers who have edited OSM in the past three days aren't all mapmaking professionals. The Humanitarian OpenStreetMap Team organized the effort on Twitter, putting out calls for the areas of the Philippines in the path of the storm to be mapped.

What does that mapping look like? Mostly, it involves “tracing” roads into OSM using satellite data. OSM has a friendly editor that underlays satellite imagery—on which infrastructure like roads is clearly visible—beneath the map data already in OSM. Volunteers can then trace the path of a road, as they do in a GIF created by the D.C.-based start-up Mapbox.
Volunteers can also trace buildings into OSM using the same visual editor. Since Haiyan made landfall, volunteers have traced some 30,000 buildings.
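For readers curious what a "traced" feature looks like once it lands in OSM, here is a minimal Python sketch that assembles an OSM-style XML fragment for a short road: a few nodes (the points clicked along the road) referenced by a way with a road tag. The IDs, coordinates, and tag value are made up for illustration.

```python
import xml.etree.ElementTree as ET

def traced_road(points, way_id=-1):
    """Build a minimal OSM-style XML fragment for a traced road.
    Negative IDs mark new, not-yet-uploaded objects, as OSM editors do."""
    osm = ET.Element("osm", version="0.6")
    for i, (lat, lon) in enumerate(points, start=1):
        ET.SubElement(osm, "node", id=str(-i), lat=f"{lat:.7f}", lon=f"{lon:.7f}")
    way = ET.SubElement(osm, "way", id=str(way_id))
    for i in range(1, len(points) + 1):
        ET.SubElement(way, "nd", ref=str(-i))
    ET.SubElement(way, "tag", k="highway", v="residential")  # illustrative tag only
    return ET.tostring(osm, encoding="unicode")

# Three made-up points near Tacloban, as if traced from satellite imagery:
print(traced_road([(11.2433, 125.0040), (11.2441, 125.0052), (11.2450, 125.0063)]))
```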

Maps, on the ground 


How does that mapping data help workers on the ground in the Philippines? First, it lets workers there print paper maps using OSM data which can be distributed to workers in the field. The American Red Cross has dispatched four of its staff members to the Philippines, and one of them—Helen Welch, an information management specialist—brought with her more than 50 paper maps depicting the city of Tacloban and other badly hit areas.
The red line shows the path of super typhoon Haiyan and the colored patches show where volunteers made additions to OpenStreetMap this weekend. Notice the extent of the edits in Tacloban, a city of more than 220,000 that bore the brunt of the storm. (American Red Cross)
Those maps were printed out on Saturday, before volunteers made most of the changes to the affected area in OSM. When that newer data is printed out on the ground, the maps will include almost all of the traced buildings, and rescuers will have a better sense of where “ghost” buildings should be standing. They’ll also be on paper, so workers can write on them, draw on them, and stick pins in them.

Welch landed 12 hours ago, and Kunce said they “had already pushed three to four more maps to her.”
A part of the city of Tacloban before and after it was mapped by the Humanitarian OSM Team. Roads, buildings, and bodies of water were missing before volunteers added them.

The Red Cross began to investigate using geospatial data after the massive earthquake in Haiti in 2010. Using pre-existing satellite data, volunteers mapped almost the entirety of Port-au-Prince in OSM, creating data which became the backbone for software that helped organize aid and manage search-and-rescue operations.

That massive volunteer effort convinced leaders at the American Red Cross to increase the staff focusing on their digital maps, or geographic information systems (GIS). They've seen a huge increase in both the quality and quantity of maps since then.

But that’s not all maps can do.

The National Geospatial-Intelligence Agency (NGA), operated by the U.S. Department of Defense, has already captured satellite imagery of the Philippines. That agency has decided where the very worst damage is, and has sent the coordinates of those areas to the Red Cross. But, as of 7 p.m. Monday, the Red Cross doesn’t have that actual imagery of those sites yet.

The goal of the Red Cross geospatial team, said Kunce, was to help workers “make decisions based on evidence, not intuition.” The team “puts as much data in the hands of responders as possible.” What does that mean? Thanks to volunteers, the Red Cross knows where roads and buildings should be. But until it gets the second set of data, describing the land after the storm, it doesn't know where roads and buildings actually are. Until it gets the new data, its volunteers can’t decide which of, say, three roads to use to send food and water to an isolated village.

Right now, they can’t make those decisions.

Kunce said the U.S. State Department was negotiating with the NGA for that imagery to be released to the Red Cross. But, as of publishing, it’s not there yet.

When open data advocates discuss data licenses, they rarely discuss them in terms of life and death. But every hour that the Red Cross goes without this imagery is an hour in which better decisions cannot be made about where to send supplies or where to conduct rescues.

And after that imagery does arrive, OSM volunteers around the world can compare it to the pre-storm structures, marking each of the 30,000 buildings as unharmed, damaged, or destroyed. That phase, which hasn’t yet begun, will help rescuers prioritize their efforts.
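Once that phase starts, prioritizing areas could be as simple as tallying assessments by damage category. The following toy Python sketch is purely illustrative; the place names, data layout, and counts are invented:

```python
from collections import Counter

# Hypothetical volunteer assessments: (area, status) pairs
assessments = [
    ("Tacloban - San Jose", "destroyed"),
    ("Tacloban - San Jose", "damaged"),
    ("Tacloban - Downtown", "unharmed"),
    ("Guiuan", "destroyed"),
    ("Guiuan", "destroyed"),
]

def priority_order(assessments):
    """Rank areas by the count of destroyed buildings, highest first."""
    destroyed = Counter(area for area, status in assessments if status == "destroyed")
    return destroyed.most_common()

print(priority_order(assessments))  # [('Guiuan', 2), ('Tacloban - San Jose', 1)]
```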

OSM isn’t the only project using online volunteers to help the Philippines: MicroMappers, run by a veteran of OSM efforts in Haiti, used volunteer-sorted tweets to determine which areas most required relief. Talking to me, Kunce said the digital “commodification of maps” had generally contributed to a flourishing of their quantity and quality across many different aid organizations.

“If you put a map in the hands of somebody, they’re going to ask for another map,” said Kunce. Let’s hope the government can put better maps in the hands of the Red Cross—and the workers on the ground—soon.

Monday, November 11, 2013

Cattle Ranchers Track Wolves with GPS, Computers

As reported by the Spokesman-Review: Before the sun breaks over the mountains, Leisa Hill is firing up a generator in a remote cow camp in eastern Stevens County.


Soon she’ll be poring over satellite data points on her laptop, tracking the recent wanderings of a GPS-collared wolf.
Hill is a range rider whose family grazes 1,300 head of cattle in the Smackout pack’s territory. Knowing the collared wolf’s whereabouts helps her plan her day.
She’ll spend the next 12 to 16 hours visiting the scattered herd by horseback or ATV. Through the regular patrols, she’s alerting the Smackout pack that cattle aren't easy prey.
Her work is paying off. Last year, 100 percent of the herd returned from the U.S. Forest Service allotments and private pastures that provide summer and fall forage. This year’s count isn't final, but the tallies look promising, said Hill’s dad, John Dawson.
“We've lost nothing to wolves,” he said.
Hill’s range rider work is part of a pilot project that involves two generations of a northeastern Washington ranch family, the state and Conservation Northwest. The aim is to keep Washington’s growing wolf population out of trouble.
Last year, government trappers and sharpshooters killed seven members of the Wedge pack for repeatedly attacking another Stevens County rancher’s cattle.
That short-term fix came at a high political price: The state Department of Fish and Wildlife received 12,000 emails about the decision, mostly in opposition. Two wolves have again been spotted in the Wedge pack’s territory, either remnants of the original pack or new wolves moving in.
It upped the ante for all sides to be proactive.

Ranchers can’t fight public opinion

Many Washington residents want wolves, said Dawson, a 70-year-old rancher whose son, Jeff, also runs a Stevens County cattle operation.
“I can’t fight that,” John Dawson said of public opinion. “You have to meet in the middle; you have no choice.
“We put most of our cattle in wolf territory for the summer,” he said. “I've been trying to learn as much as possible about wolves so we can meet them at the door.”
For ranchers, “it’s a new business now, a new world,” said Jay Kehne of Conservation Northwest, a Bellingham-based environmental group that works on issues across Washington and British Columbia.
Conservation Northwest supported last year’s controversial decision to remove the Wedge pack. “We wanted to do what we felt was scientifically right, what was supported by the evidence, what people knowledgeable about cattle and wolf behavior were telling us,” Kehne said.
But the organization obviously prefers preventive, nonlethal measures, he said. Conservation Northwest had talked to Alberta and Montana cattle ranchers who use range riders and was looking for Washington ranchers willing to try it. The Dawsons were interested.
Conservation Northwest helps finance three range riders in Washington – the Dawsons in Stevens County, and others in Cle Elum and Wenatchee.
Hiring a range rider costs $15,000 to $20,000 for the five-month grazing season, Kehne said. The state and individual ranchers, including Dawson, also contribute to the cost.
In addition, the state Department of Fish and Wildlife provides daily satellite downloads on GPS-collared wolves to help range riders manage the cows.
Collared wolves are known as “Judas wolves” for betraying the pack’s location.
The downloads give the wolves’ locations for the past 24 hours, though the system isn’t foolproof, said Jay Shepherd, a state wildlife conflict specialist. Dense stands of trees can block signals, and the timing of satellite orbits affects data collection.
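Each download is essentially a list of timestamped latitude/longitude fixes. As a rough illustration of how such fixes might be screened against a grazing area (the coordinates, radius, and data layout here are invented, not the state's actual format), consider this minimal Python sketch using the standard haversine distance:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical collar fixes from the past 24 hours: (hour, lat, lon)
fixes = [(2, 48.82, -117.55), (9, 48.79, -117.60), (17, 48.77, -117.63)]
allotment = (48.78, -117.62)   # made-up center of a grazing allotment
ALERT_RADIUS_KM = 3.0          # made-up alert radius

for hour, lat, lon in fixes:
    d = haversine_km(lat, lon, *allotment)
    if d <= ALERT_RADIUS_KM:
        print(f"hour {hour}: wolf within {d:.1f} km of the herd -- check that pasture first")
```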
Last winter, the state captured and collared three wolves in the Smackout pack. One of the collars has a radio-based signal that can be detected when the wolf is nearby. The other two wolves received GPS collars. One of the collars has stopped working. The remaining GPS collar is on a young male that doesn’t always stay with the pack.
Ranchers must sign an agreement to access the satellite downloads. “They understand it is sensitive data that’s not to be shared,” said Stephanie Simek, the state’s wildlife conflict section manager.
GPS tracking adds a high-tech element to modern range riding, but much of it is still grunt work. The Smackout pack’s territory covers about 400 square miles. John and Jeff Dawson’s cattle graze 10 to 15 percent of the pack’s territory, but their range encompasses the heart of it.
Leisa Hill’s work starts in early June, when the cows and calves are turned loose on Forest Service allotments and private pastures. The range riding continues through 100-degree August days and wraps up in early November after the first snowfall.
She travels nearly 1,000 miles each month by horse and ATV through thick timber to reach scattered grazing areas. She watches for bunched or nervous cows, as well as sick or injured animals that wolves might consider easy prey.
She’s also alert to patterns in the wolves’ movements. Regular visits to a particular site probably indicate the presence of a carcass.
Hill has fired noise-makers to scare off adult wolves that were in the same pasture as cows. Last year, she spotted four wolf pups on the road.
The 46-year-old prefers to stay in the background, declining to be interviewed for this story. However, “the success of this range rider program is because of Leisa,” her father said. “She knows the range and she understands cow psychology.”

Skinny calves mean a financial loss

On a recent fall morning, John Dawson drove a pickup over Forest Service roads past small clusters of Black Angus, Herefords and cream-colored Charolais cows with their calves.
The cows were just how he likes to see them: relaxed, spread out and eating. Calves should be putting on 2 to 3 pounds a day.
“When they’re not laying around, resting and eating, they’re not gaining,” he said.
Dawson heard his first wolf howl in 2011, the year before the range rider pilot started. He and his son lost seven calves that summer, though they couldn’t find the carcasses to determine cause of death.
The remaining calves were skinnier than usual. They probably spent the summer on the run from wolves, or tightly bunched together and not making good use of the forage, Dawson said. For ranchers, skinny calves can be a bigger financial blow than losing animals.
Say a rancher has 500 calves and they each come in 40 pounds lighter than normal. At a market price of $1.50 per pound, “that’s a bigger loss ($30,000) than losing seven calves, which is about a $5,000 loss,” he said.
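A quick back-of-the-envelope check of that comparison, assuming roughly 475 pounds per lost calf (a figure not given in the article) so the seven-calf loss lands near Dawson's $5,000 estimate:

```python
price_per_lb = 1.50  # market price from the article, $/lb

# Scenario 1: 500 calves, each 40 lb lighter than normal
underweight_loss = 500 * 40 * price_per_lb
print(f"Weight-loss scenario: ${underweight_loss:,.0f}")    # $30,000

# Scenario 2: losing seven calves outright (assumed ~475 lb per calf)
dead_calf_loss = 7 * 475 * price_per_lb
print(f"Seven-calf scenario: about ${dead_calf_loss:,.0f}")  # roughly $5,000
```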
Over the past two years, the Dawsons have seen robust weight gain in their calves. They credit the range rider program.
Earlier this year, Jeff Dawson and Shepherd, the state wildlife conflict specialist, talked with Klickitat County cattle ranchers. Wolves have been spotted in south-central Washington, and some of those ranchers are starting to experiment with range riders.
“The success the Dawsons have had has gone a long way to helping promote nonlethal means and proactive measures to reduce conflict,” said Jack Field, the Washington Cattlemen’s Association’s executive vice president.
If ranchers take extra steps to protect their animals, the public is more likely to accept the occasional need to kill wolves that repeatedly attack livestock, said Conservation Northwest’s Kehne.
John Dawson and his wife, Melva, spent decades building their ranch, working other jobs while they grew the herd. To preserve that legacy, the family was willing to try new ways of doing business, he said.
“I think (range riding) would work for a good share of other ranchers,” he said. But “they have to be open-minded enough to want it to work.”

GPS Navigation Payload Headed in Right Direction

As reported by Space News: Problems with the Exelis-built navigation payload on the U.S. Air Force’s next generation of positioning, navigation and timing satellites appear to be solved, according to a company spokeswoman.


Gen. William Shelton, commander of Air Force Space Command, said in September that the GPS 3 navigation payload had no firm delivery date due to manufacturing and processing issues. While the payload’s woes had not yet delayed the GPS 3 program schedule, “we’re running right up against our margins,” Shelton said at the time.
Exelis Geospatial Systems of Rochester, N.Y., is developing the GPS 3 system’s main navigation payload, a role it has had from the beginning of the Lockheed Martin-led program. Exelis spokeswoman Jane Khodos told SpaceNews Nov. 6 “the known technical issues have been resolved.”
Khodos said the navigation payload for the first GPS 3 satellite has been built and is currently being tested with an expected delivery sometime in spring 2014. 
The “navigation payload delays have been driven by first-time development and integration issues, including design changes to eliminate signal crosstalk,” she said. Crosstalk occurs when a signal is broadcast on one circuit and creates an undesired effect on another circuit. GPS 3 will carry a new civil signal that is designed to work with other international global navigation satellite systems.
“GPS 3 will meet all mission and quality requirements,” Khodos said. “Lockheed Martin and Exelis are taking every step necessary to execute successfully, and are rigorously testing the first space vehicle navigation payload to ensure the quality of the GPS 3 design.” 
Denver-based Lockheed Martin Space Systems is the prime contractor on GPS 3, which will feature improved accuracy and better resistance to jamming and other forms of interference than previous generations of GPS craft. Currently the Air Force has eight GPS 3 satellites either fully or partially under contract with Lockheed Martin, and the service earlier this year signaled its intent to order another 12 from the incumbent contractor. But Shelton has said the GPS-3 program’s future is a “question mark,” and that the service may look to try out “alternative architectures” for space-based navigation. 
In December 2012, Exelis announced it had integrated and performed initial testing of a payload aboard a prototype GPS 3 satellite.
The GPS 3 satellites currently are slated to start launching in 2015.
Meanwhile, Exelis announced Nov. 4 that software used to simulate the behavior of GPS signals in space and better understand the satellites’ exact positions has completed factory testing. The system will be used as part of the GPS Operational Control Segment (OCX), built by Raytheon Intelligence and Information Systems of Aurora, Colo.
The OCX is expected to support the GPS 3 constellation’s stringent accuracy, anti-jam and information assurance requirements. The system also will be backward compatible with the current generation of GPS satellites.

Sunday, November 10, 2013

All About Beamforming, the Faster Wi-Fi You Didn't Know You Needed

As reported by PC World: Beamforming is one of those concepts that seem so simple that you wonder why no one thought of it before. Instead of broadcasting a signal to a wide area, hoping to reach your target, why not concentrate the signal and aim it directly at the target?


Sometimes the simplest concepts are the most difficult to execute, especially at retail price points. Fortunately, beamforming is finally becoming a common feature in 802.11ac Wi-Fi routers (at least at the high end). Here’s how it works.
First, a bit of background: Beamforming was actually an optional feature of the older 802.11n standard, but the IEEE (the international body that establishes these standards) didn’t spell out how exactly it was to be implemented. The router you bought might have used one technique, but if the Wi-Fi adapter in your laptop used a different implementation, beamforming wouldn’t work.
Some vendors developed pre-paired 802.11n kits (with Netgear’s WNHDB3004 Wireless Home Theater Kit being one of the best examples), but these tended to be expensive, and they never had much of an impact on the market.
The IEEE didn’t make the same mistake with the 802.11ac standard that’s in today’s high-end devices. Companies building 802.11ac products don’t have to implement beamforming, but if they do, they must do so in a prescribed fashion. This ensures that every company’s products will work together. If one device (such as the router) supports beamforming, but the other (such as the Wi-Fi adapter in your laptop) doesn’t, they’ll still work together. They just won’t take advantage of the technology.
Beamforming can help improve wireless bandwidth utilization, and it can increase a wireless network’s range. This, in turn, can improve video streaming, voice quality, and other bandwidth- and latency-sensitive transmissions.
Beamforming is made possible by transmitters and receivers that use MIMO (multiple-input, multiple-output) technology: Data is sent and received using multiple antennas to increase throughput and range. MIMO was first introduced with the 802.11n standard, and it remains an important feature of the 802.11ac standard.

How beamforming works

Wireless routers (or access points) and wireless adapters that don’t support beamforming broadcast data pretty much equally in all directions. For a mental picture, think of a lamp without a shade as the wireless router: The bulb (transmitter) radiates light (data) in all directions.
Devices that support beamforming focus their signals toward each client, concentrating the data transmission so that more data reaches the targeted device instead of radiating out into the atmosphere. Think of putting a shade on the lamp (the wireless router) to reduce the amount of light (data) radiating in all directions. Now poke holes in the shade, so that concentrated beams of light travel to defined locations (your Wi-Fi clients) in the room.
If the Wi-Fi client also supports beamforming, the router and client can exchange information about their respective locations in order to determine the optimal signal path. Any device that beamforms its signals is called a beamformer, and any device that receives beamformed signals is called a beamformee.
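In signal-processing terms, "aiming" the transmission means applying a phase shift to each antenna so the signals add constructively in the client's direction. Here is a minimal numpy sketch of steering weights and the resulting gain for a uniform linear array; the array geometry and angles are illustrative and not tied to any 802.11ac chipset:

```python
import numpy as np

def steering_weights(n_antennas, spacing_wl, angle_deg):
    """Per-antenna phase weights that aim a uniform linear array at angle_deg."""
    n = np.arange(n_antennas)
    phase = 2 * np.pi * spacing_wl * n * np.sin(np.radians(angle_deg))
    return np.exp(-1j * phase) / np.sqrt(n_antennas)  # normalize total transmit power

def gain_db(weights, spacing_wl, toward_deg):
    """Array gain (dB, relative to a single antenna) in the direction toward_deg."""
    n = np.arange(len(weights))
    response = np.exp(1j * 2 * np.pi * spacing_wl * n * np.sin(np.radians(toward_deg)))
    return 20 * np.log10(abs(response @ weights))

w = steering_weights(n_antennas=3, spacing_wl=0.5, angle_deg=30)
print(round(gain_db(w, 0.5, 30), 1))   # ~4.8 dB: signals add in phase toward the client
print(round(gain_db(w, 0.5, -45), 1))  # much lower gain off-target
```

The same idea runs in reverse on the receive side, which is why a beamforming router and a beamforming client together get the largest range and throughput gains.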

Netgear's Beamforming+

As mentioned earlier, beamforming support is an optional element of the 802.11ac standard, and any vendor offering it must support a specific technique. But the vendor can also offer other types of beamforming in addition to that standard technique.
Netgear’s Beamforming+ is a superset of the beamforming technique defined in the 802.11ac standard, so it’s interoperable with any other 802.11ac device that also supports beamforming. But Beamforming+ does not require the client device to support beamforming, so you could see range and throughput improvements by pairing one of Netgear’s routers (specifically, Netgear’s models R6300, R6200, and R6250) with any 5GHz Wi-Fi device (Netgear’s R7000 Nighthawk router also supports beamforming on its 2.4GHz network).
Netgear is not the only router manufacturer to support beamforming, of course. It’s becoming a common feature on all of the higher-end Wi-Fi routers and access points. If you’re in the market and want a router that supports beamforming, check the router’s specs on the box or at the vendor’s website. Here are three other routers you might consider: the Linksys EA6900, the D-Link DIR-868L, and the Trendnet TEW-812DRU.