
Friday, March 28, 2014

Survey: More Than 1 In 4 Car Crashes Involve Cellphone Use

As reported by CBS NY: Texting and driving is dangerous, but a new survey finds that talking on a cellphone behind the wheel may be even worse.

As WCBS 880's Paul Murnane reported from Stamford, the National Safety Council's annual report found that 26 percent of all crashes are tied to phone use, while just 5 percent involve texting.

Safety advocates are now lobbying for a total ban on driver phone use, pointing to studies showing that hands-free headsets do not reduce driver distraction.

Some motorists said they support the idea.

“Everybody’s on a telephone. If people do cut you off, you look and they’re talking on the telephone. I think they are a problem,” a driver told Murnane, “hands-free or not.”

“People just get too involved in the conversation. Either pull over or wait,” said another man.

The survey found a 1 percent increase in cellphone-involved accidents compared with the previous year.

A spokesperson for the non-profit Governors Highway Safety Association told Marketwatch.com that it may be that drivers are more comfortable calling than texting in a moving vehicle.

The group believes the data on distracted crashes is underreported.

Thursday, March 27, 2014

Taxis 2.0: Streamlining City Transport With Graph Theory

Big-city taxi systems could be 40% more efficient with device-enabled taxi sharing.

As reported by Medium: Everything today is about information and algorithms for processing it. Think of what Google’s PageRank algorithm did for web search, transforming an impenetrable jungle of web pages into an easy-to-use and hugely powerful information resource. That was then, in the late 1990s. Now think taxis.

In New York City, people take more than 100 million taxi trips every year, as individual parties hail cabs or book them by phone to suit their own needs. Taxis, as a result, criss-cross the city in a tangle of disorganized mayhem. Cabs run in parallel up and down Madison Avenue, often carrying isolated people along the same path. Those people could share a cab, yet lack a mechanism to achieve that coordination. But that mechanism might soon exist, and it could make taxi transport everywhere a lot more efficient.

That’s the message of some fascinating new research by a group of network theorists (the paper is unpublished, currently under review at a journal). It’s fairly mathematical, and relies on some technical results from graph theory (as did Google’s PageRank algorithm), but the basic insight from which it starts is quite simple: a good fraction of the trips that taxis take in a city overlap, at least partially, and so present opportunities for people to share cabs. Using real data from NYC, they’ve shown that a simple algorithm can calculate which rides could easily be shared by two parties without causing either much delay. In principle, the algorithm could be exploited by smartphones to help people organize themselves — it could make the NYC taxi system 40% more efficient, reducing miles traveled, pollution, and costs.

7 days of taxi traffic history.
A little more detail: Imagine that you label every taxi ride by its origin and destination, plus departure and arrival time. Represent each such ride by a point on some very big page. For NYC in 2011, there were some 150 million rides starting or ending in Manhattan, so imagine a page with 150 million points on it, each labelled by the above data. These points, in effect, show you all the taxi rides that took place to get people in NYC to the places they wanted to go. 

What Santi and colleagues do is to ask whether some of these rides might have been “shareable,” in the sense that they actually traveled along parallel routes (or between the same points, along different routes) at close to the same time. If so, then people given the right knowledge could have shared a portion of the trip.

This huge collection of points becomes a mathematical graph once you begin linking together the points for any pair of rides that are “shareable.” By studying the properties of this graph, the researchers show that if people were willing to be delayed by up to ten minutes on their journeys, then there are roughly 100 billion pairs of trips that were shareable. If people are more choosy — unwilling to accept more than a five-minute delay — then fewer rides become shareable, but still enough to reduce the total miles driven by taxis by 40%. That 40% might be slightly optimistic, they note, given the constraints on any network system attempting to process this information in real time (and thereby having less than perfect knowledge of the whole set of taxi journeys).
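
The shareability idea lends itself to a compact illustration. The sketch below (a minimal, hypothetical example, not the authors' algorithm) builds a toy shareability graph in Python with networkx: each trip is a node, an edge joins two trips whose pickups, drop-offs, and departure times are close under a crude straight-line travel-time model, and a maximum matching picks disjoint pairs that could share a cab. All speeds, coordinates, and thresholds are illustrative assumptions.

```python
# Toy shareability graph: trips are nodes, edges join trips that could plausibly
# be combined, and a matching selects disjoint pairs to share a cab. The travel
# time model and thresholds are placeholders, not values from the paper.
import math
from dataclasses import dataclass
import networkx as nx

AVG_SPEED_KMH = 20.0   # assumed average taxi speed (placeholder)

@dataclass(frozen=True)
class Trip:
    trip_id: int
    origin: tuple          # (lat, lon)
    destination: tuple     # (lat, lon)
    departure_min: float   # minutes past midnight

def minutes_between(a, b):
    """Straight-line travel time in minutes (crude placeholder model)."""
    dx_km = (a[0] - b[0]) * 111.0   # ~km per degree of latitude
    dy_km = (a[1] - b[1]) * 85.0    # ~km per degree of longitude near NYC
    return math.hypot(dx_km, dy_km) / AVG_SPEED_KMH * 60.0

def shareable(t1, t2, max_delay_min=5.0):
    """Rough test: pickups, drop-offs, and departure times are all close."""
    return (abs(t1.departure_min - t2.departure_min) <= max_delay_min
            and minutes_between(t1.origin, t2.origin) <= max_delay_min
            and minutes_between(t1.destination, t2.destination) <= max_delay_min)

def build_shareability_graph(trips, max_delay_min=5.0):
    g = nx.Graph()
    g.add_nodes_from(t.trip_id for t in trips)
    for i, t1 in enumerate(trips):
        for t2 in trips[i + 1:]:
            if shareable(t1, t2, max_delay_min):
                g.add_edge(t1.trip_id, t2.trip_id)
    return g

trips = [
    Trip(0, (40.742, -73.989), (40.758, -73.985), 540.0),  # 9:00 am
    Trip(1, (40.741, -73.990), (40.759, -73.984), 543.0),  # 9:03 am
    Trip(2, (40.712, -74.006), (40.730, -73.997), 600.0),  # 10:00 am
]
graph = build_shareability_graph(trips)
pairs = nx.max_weight_matching(graph)   # disjoint pairs of trips to merge
print(pairs)  # typically pairs trips 0 and 1; trip 2 is too far away in time
```

On real trip data the pairwise loop would be replaced by spatial indexing and the delay test by actual routed travel times, but the graph-plus-matching structure is the same one described above.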

The point is that people make their decisions about taxis in an information vacuum, knowing only what they themselves require and nothing about the taxi needs of the people around them. How many times have you needed a taxi and waited as ten of them zipped by, all traveling in your direction, each carrying just one or two passengers? For a delay of a few minutes, many of those people might have been willing to save some money by sharing part of the ride. Making that happen requires information, algorithms to sift through it, and a means of sharing the results with everyone who wants them.

All of which may be a reality soon. For more detail on this work, see the website of the HubCab project.

Wednesday, March 26, 2014

Galileo Conjures Up Improbable GPS Cutoff Scenario

As reported by Space News: The European Commission’s argument that its Galileo satellite positioning, navigation and timing program is a hedge against the day when the U.S. government arbitrarily shuts off GPS — for whatever reason — has been a driving political motivation for Galileo since the project’s beginning in the mid-1990s.

So has the idea that GPS, which is funded mainly by the U.S. Defense Department, should be seen as inherently unreliable for non-military users compared to Galileo, which is 100 percent financed by civil authorities.

U.S. government officials — military and civil — have gone hoarse over the years explaining that GPS has been formally declared a dual-use system overseen by a civil-military board. The infrastructure, often described as a global utility, generates thousands of jobs and billions in annual commercial revenue and underpins the global financial system in addition to being the default positioning and navigation service for the NATO alliance.

A scenario in which GPS would be simply shut down — outside of limited-area jamming during a war — is inconceivable, they say. Despite these assurances, and perhaps because of Galileo’s unstable financial history, the commission continues to wave the GPS-shutoff-threat shibboleth.

Here is an example of it from the commission’s “Why we need Galileo” brochure:
“How secure is your security?

“From the beginning, the American GPS system has been aimed at providing a key strategic advantage to the U.S. and allied military troops on the battlefield. Today, the free GPS signal is also used around the world by security forces such as the police.

“Stieg Hansen is a retired military officer from Malmo now representing a large producer of security systems. Today he is speaking to a group of people at an important trade show. Behind him, a bold sign reads, ‘GPS for Security.’ His audience includes a number of stern men and women, and one person who looks like a journalist.

“‘The Stryker, as we like to call it in the field, is the hand-held GPS receiver for domestic security.’ Brandishing a notebook-sized electronic device, he continues: ‘This little baby has all the hardware you will ever need to locate, mobilize and coordinate your security team, wherever they may be.’
“Someone in the audience calls out: ‘What if GPS gets cut off?’

“Hansen hesitates, does not look at the person asking the question, then continues: ‘Most European governments have placed restrictions on the sale and use of this little baby, due to the powerful electronics inside. Very robust, very difficult to jam.’

“‘What if the little baby can’t get GPS?’

“‘This little bab…,’ Hansen stops short before finishing the word. ‘This device’s primary mission is to provide positioning support, velocity, navigation and timing to all land-based security operations, including police forces in pursuit of criminals or transporting dangerous prisoners, border guards in anti-smuggling operations and…’

“‘He’s not answering the question,’ someone murmurs. Other members of the audience are now looking at each other. One person says to his neighbor, ‘That’s right. What if GPS stops working?’ Hansen takes a step backwards.”

The pamphlet ends by saying: “The stories presented in this brochure are fictitious. Any resemblance to real events or persons is purely coincidental.”

Using Adaptive Notch Filters To Reduce The Effects Of GPS Jamming


As reported by Inside GNSS: GNSS jammers are small portable devices able to broadcast powerful disruptive signals in the GNSS bands. A jammer can overpower the much weaker GNSS signals and disrupt GNSS-based services in a geographical area with a radius of several kilometers. Despite the fact that the use of such devices is illegal in most countries, jammers can be easily purchased on the Internet and their rapid diffusion is becoming a serious threat to satellite navigation.

Several studies have analyzed the characteristics of the signals emitted by GNSS jammers. From the analyses, it emerges that jamming signals are usually characterized by linear frequency modulations: the instantaneous frequency of the signal sweeps a range of several megahertz in a few microseconds, affecting the entire GNSS band targeted by the device.

The fast variations of their instantaneous frequency make the design of mitigation techniques particularly challenging. Mitigation algorithms must track fast frequency variations and filter out the jamming signals without introducing significant distortions on the useful GNSS components. The design problem becomes even more challenging if only limited computational resources are available.

We have analyzed the ability of an adaptive notch filter to track fast frequency variations and mitigate a jamming signal. In this article, we begin by briefly describing the structure of the selected adaptive notch filter along with the adaptive criterion used to adjust the frequency of the filter notch.

When the adaptation parameters are properly selected, the notch filter can track the jamming signals and significantly extend the ability of a GNSS receiver to operate in the presence of jamming. Moreover, the frequency of the filter notch is an estimate of the instantaneous frequency of the jamming signal. Such information can be used to determine specific features of the jamming signal, which, in turn, can be used for jammer location using a time difference of arrival (TDOA) approach.

The capabilities of the notch filter are analyzed through a series of experiments performed in a large anechoic chamber. The experiments employ a hardware simulator to broadcast GPS and Galileo signals and a real jammer to disrupt GNSS operations. The GNSS and interfering signals were recorded using an RF signal analyzer and analyzed in post-processing. We processed the collected samples using the selected adaptive notch filter and a custom GNSS software receiver developed in-house.

The use of mitigation techniques, such as notch filtering, significantly improves the performance of GNSS receivers, even in the presence of strong and fast-varying jamming signals. The presence of a pilot tone in the Galileo E1 signal enables pure phase-locked loop (PLL) tracking and makes the processing of Galileo signals more robust to jamming.

Adaptive Notch Filter
Several interference mitigation techniques have been described in the technical literature and are generally based on the interference cancellation principle. These techniques attempt to estimate the interference signal, which is subsequently removed from the input samples. For example, transform domain excision techniques first project the input signal onto a domain where the interference signal assumes a sparse representation. (See the articles by J. Young et alia and M. Paonni et alia, referenced in the Additional Resources section near the end of this article.) The interference signal is then estimated from the most powerful coefficients of the transformed domain representation. The interfering signal is removed in the transformed domain, and the original signal representation is restored.

When the interfering signal is narrow band, discrete Fourier transform (DFT)-based frequency excision algorithms, described in the article by J. Young and J. Lehnert, are particularly effective. Transform domain excision techniques are, however, computationally demanding, and other mitigation approaches have been explored. For example, notch filters are particularly effective for removing continuous wave interference (CWI). M. Paonni et alia, cited in Additional Resources, considered the use of a digital notch filter for removing CWI, the center frequency of which was estimated using the fast Fourier transform (FFT) algorithm. Despite the efficiency of the FFT algorithm, this approach can result in a significant computational burden and alternative solutions should be considered.
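
As a concrete illustration of the excision idea (not the notch filter ultimately adopted in this article), the short Python sketch below zeroes DFT bins whose power stands well above the median noise floor and transforms back. The block length and threshold are illustrative choices.

```python
# Minimal DFT-based frequency excision for narrowband interference: bins whose
# power exceeds a threshold relative to the median are assumed to be
# interference and are zeroed before the inverse transform.
import numpy as np

def dft_excision(block, threshold_db=10.0):
    spectrum = np.fft.fft(block)
    power = np.abs(spectrum) ** 2
    floor = np.median(power)                    # robust estimate of the noise floor
    mask = power > floor * 10 ** (threshold_db / 10.0)
    spectrum[mask] = 0.0                        # excise suspect bins
    return np.fft.ifft(spectrum)

# Toy example: noise plus a strong CW tone at 1 kHz, sampled at 10 kHz.
fs = 10e3
t = np.arange(1024) / fs
noisy = np.random.randn(1024) + 10.0 * np.cos(2 * np.pi * 1e3 * t)
cleaned = dft_excision(noisy)
print(np.var(noisy), np.var(cleaned.real))      # variance drops once the tone is removed
```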

The article by M. Jones described a finite impulse response (FIR) notch filter for removing unwanted CW components and highlighted the limitations of this type of filter. Thus, we adopted an infinite impulse response (IIR) structure and experimentally demonstrated its suitability for interference removal. In particular we considered the adaptive notch filter described in the article by D. Borio et alia listed in Additional Resources and investigated its suitability for mitigating the impact of a jamming signal.

This technique has been selected for its reduced computational requirements and for its good performance in the presence of CWI. Note that the notch filter under consideration has been extensively tested in the presence of CWI; however, its performance in the presence of frequency-modulated signals has not been assessed. Also, note that removing a jamming signal poses several challenges that derive from the swept nature of this type of interference. (For details, see the paper by R. H. Mitch et alia.)


Jamming signals are usually frequency modulated with a fast-varying center frequency. The time-frequency evolution of the signal transmitted by an in-car GPS jammer is provided as an example in Figure 1. The instantaneous center frequency of the jamming signal sweeps a frequency range of more than 10 megahertz in less than 10 microseconds. The adaptation criterion selected for estimating the center frequency of the jamming signal has to be sufficiently fast to track these frequency variations.
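
As an illustration, the snippet below generates a complex-baseband chirp whose instantaneous frequency sweeps roughly 10 MHz every 10 microseconds, consistent with the figures quoted above. The sampling rate and the sawtooth sweep shape are assumptions; real jammers differ in detail.

```python
# Complex baseband model of a swept (chirp) jamming signal: the instantaneous
# frequency ramps across ~10 MHz every 10 microseconds and then repeats.
# Sweep range, period, and sampling rate are illustrative values.
import numpy as np

fs = 40e6                 # sampling rate, Hz (assumed)
sweep_bw = 10e6           # swept bandwidth, Hz
sweep_period = 10e-6      # sweep duration, s
n = np.arange(int(1e-3 * fs))           # 1 ms of samples
t = n / fs

# Sawtooth instantaneous frequency from -B/2 to +B/2, repeating every period.
f_inst = -sweep_bw / 2 + sweep_bw * ((t % sweep_period) / sweep_period)
phase = 2 * np.pi * np.cumsum(f_inst) / fs      # integrate frequency to phase
jammer = np.exp(1j * phase)
```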

The notch filter considered in this work is characterized by the following transfer function:

Equation 1
where kα is the pole contraction factor and z0[n] is the filter zero. kα controls the width of the notch introduced by the filter, whereas z0[n] determines the notch center frequency. Note that z0[n] is progressively adapted using a stochastic gradient approach described in the textbook by S. Haykin with the goal of minimizing the energy at the output of the filter. A thorough description of the adaptation algorithm can be found in the article by D. Borio et alia.
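
The equation itself appears in the original article only as an inset image. For reference, the single-pole adaptive notch filter described in the cited Borio et alia paper, consistent with the definitions above, has the form below; treat this as a reconstruction rather than a quotation of Equation 1.

```latex
% Reconstruction of Equation 1 (single-pole adaptive notch filter, after Borio et alia);
% the original equation is shown only as an image, so the exact form is assumed.
H_n(z) = \frac{1 - z_0[n]\, z^{-1}}{1 - k_\alpha\, z_0[n]\, z^{-1}}
```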

The notch filter is able to place a deep null in correspondence with the instantaneous frequency of narrow band interference and, if the zero adaptation parameters are properly chosen, to track the interference frequency variations. The energy of the filter output is minimized when the filter zero is placed in correspondence with the jammer instantaneous frequency 

Equation 2
where Φ(nTs) is the jammer instantaneous frequency and fs = 1/Ts is the sampling frequency.
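
Equation 2 is likewise shown only as an image in the original. A plausible reconstruction, consistent with the definitions just given, is that the output energy is minimized when the phase of the filter zero matches the jammer's instantaneous frequency:

```latex
% Plausible reconstruction of Equation 2 (assumed, not quoted from the original):
% the zero phase tracks the jammer instantaneous frequency.
z_0[n] = \left| z_0[n] \right| \, e^{\, j 2\pi \Phi(nT_s)/f_s}
```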

This implies that z0[n] can be used to estimate the instantaneous frequency of the interfering signal. The magnitude of z0[n] also strongly depends on the amplitude of the interfering signal. Indeed, |z0[n]| approaches one as the amplitude of the jamming signal increases. Thus, |z0[n]| can be used to detect the presence of interference, and the notch filter activates only if |z0[n]| passes a predefined threshold, Tz. A value of Tz= 0.75 was empirically selected for the tests described in the following section.
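
A compact sketch of how such a filter might be implemented is shown below: a single complex zero is adapted with an LMS-style gradient step to minimize the output energy, and the filtered output is used only when |z0[n]| exceeds the threshold Tz. The filter structure follows the transfer function assumed above; the step size, normalization, and exact update rule are illustrative assumptions, not the authors' code.

```python
# Sketch of a single-pole adaptive notch filter with LMS-style adaptation of the
# filter zero. Input x is a numpy array of complex baseband samples. The step
# size mu is an assumption; in practice it would be normalized by input power
# and the zero magnitude bounded to keep the pole stable.
import numpy as np

def adaptive_notch(x, k_alpha=0.9, mu=1e-3, t_z=0.75):
    y = np.empty_like(x, dtype=complex)
    z0 = 0.0 + 0.0j          # adaptive zero (notch position and "strength")
    v_prev = 0.0 + 0.0j      # previous output of the autoregressive section
    active = np.zeros(len(x), dtype=bool)
    for n, xn in enumerate(x):
        v = xn + k_alpha * z0 * v_prev        # AR part: 1 / (1 - k_alpha z0 z^-1)
        yn = v - z0 * v_prev                  # MA part: (1 - z0 z^-1)
        z0 = z0 + mu * yn * np.conj(v_prev)   # gradient step to minimize |y[n]|^2
        active[n] = abs(z0) > t_z             # interference detected: enable filter
        y[n] = yn if active[n] else xn        # otherwise pass raw samples through
        v_prev = v
    return y, z0, active
```

At any instant, np.angle(z0) * fs / (2 * np.pi) gives the instantaneous frequency estimate mentioned in the text.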

Experimental Setup and Testing
To test the capability of the adaptive notch filter to mitigate against a typical in-car jammer, we conducted several experiments in a large anechoic chamber at the Joint Research Centre (JRC) of the European Commission.


Figure 2 provides a view of the JRC anechoic chamber where the jamming tests were conducted. The anechoic chamber offers a completely controlled environment in which all sources of interference besides the jammer under test can be eliminated.

The experimental setup is similar to that employed to test the impact of LightSquared signals on GPS receivers (For details, see the article by P. Boulton et alia listed in Additional Resources). We used a simulator to provide a controlled GPS and Galileo constellation, with a static receiver operating under nominal open-sky conditions. The GNSS signals were broadcast from a right hand circular polarization (RHCP) antenna mounted on a movable sled on the ceiling of the chamber. A survey grade GNSS antenna was mounted inside the chamber, and the sled was positioned at a distance of approximately 10 meters from this antenna. The GNSS receiving antenna was connected via a splitter to a spectrum analyzer, an RF signal analyzer, and a commercial high sensitivity GPS receiver. Table 1 (see inset photo, above right) lists the RF signal analyzer parameters.

To provide the source of jamming signals, a commercially available (though illegal) in-car jammer was connected to a programmable power supply. We removed the jammer’s antenna and connected the antenna port, via a programmable attenuator with up to 81 decibels of attenuation, to a calibrated standard-gain horn antenna. This horn was positioned approximately two meters from the GNSS receiving antenna.

The goal of this configuration was to permit variation of the total jammer power received at the antenna. Unfortunately, the jammer itself is very poorly shielded, so a significant amount of the interfering power seen by the receiver was found to come directly from the body of the jammer rather than through the antenna. To minimize this effect, we took great care to shield the jammer as much as possible from the GNSS antenna. We placed the jammer body in an aluminum box, which was then surrounded by RF-absorbent material. The jammer body and the receiving GNSS antenna were separated by approximately 15 meters, ensuring approximately 60 decibels of free-space path loss.

The experiment was controlled via a PXI controller, which generated synchronous triggers for the RF data collection and simulator signal generation, controlled the power supplied to the jammer, and updated the attenuation settings according to a desired profile. All events (trigger generation, jammer power on/off, attenuation setting) were time stamped using an on-board timing module. The commercial receiver was configured to log raw GPS measurements including carrier-to-noise (C/N0) values.

The experimental procedure involved two trials, each lasting approximately 40 minutes. In the first trial, the simulator and data collection equipment were both enabled, but the jammer remained powered off. In the second trial, the same scenario was generated in the simulator, the data collection equipment was enabled and, after a period of three minutes, the jammer was powered on.

We initially set the attenuation to its maximum value of 81 decibels. We subsequently reduced this in two-decibel decrements to a minimum value of 45 decibels. We maintained each level for a period of 60 seconds. Finally, we again increased the attenuation in two-decibel increments to its maximum value. Figure 3 presents this attenuation profile.

We performed a calibration procedure whereby the total received jammer power at the output of the active GNSS receiving antenna was measured using a calibrated spectrum analyzer while the attenuation level was varied. Further, the total noise power was measured in the same 12-megahertz bandwidth with the jammer switched off. This permitted the computation of the received jammer-to-noise density power ratio (J/N0) as a function of the attenuator setting.
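
The calibration arithmetic amounts to converting the noise power measured in the 12-megahertz bandwidth into a noise density and differencing it from the measured jammer power. A back-of-envelope version with placeholder power levels:

```python
# Back-of-envelope J/N0 calibration: convert a noise power measured in a 12 MHz
# bandwidth to a noise density N0, then difference it from the measured jammer
# power. The example power levels are placeholders, not measured values.
import math

bandwidth_hz = 12e6
noise_power_dbm = -95.0       # assumed noise power in 12 MHz, jammer off
jammer_power_dbm = -60.0      # assumed jammer power at one attenuator setting

n0_dbm_per_hz = noise_power_dbm - 10 * math.log10(bandwidth_hz)   # ≈ -165.8 dBm/Hz
j_over_n0_dbhz = jammer_power_dbm - n0_dbm_per_hz                 # ≈ 105.8 dB-Hz
print(round(j_over_n0_dbhz, 1))
```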

Figure 3 also shows the calibrated J/N0 at the output of the active GNSS antenna as a function of time. The analysis provided in the next section is conducted as a function of the J/N0.

Sample Results
This section provides sample results obtained using the adaptive notch filter described earlier. In particular, the loss in C/N0 experienced by the GPS and Galileo software receivers used for analysis is experimentally determined as a function of the J/N0.

The adaptive notch filter is used to reduce the C/N0 loss. Figure 4 shows the loss in C/N0 experienced in the presence of the jammer as a function of J/N0. The first curve arises from software receiver processing of the GPS signals, the second from software receiver processing of the Galileo signals, and the third from the commercial high-sensitivity receiver, which processed only the GPS signals.

Note the small difference between the GPS and Galileo results. This is to be expected due to the wideband nature of the jammer. In fact, for both GPS and Galileo processing the jammer is effectively averaged over many chirp periods, thereby giving it the appearance of a broadband (white) noise source. The one difference between the GPS and Galileo signals is that the tracking threshold of the Galileo signals is approximately six decibels lower than that for the GPS signals. This is due to the use of a pure PLL processing strategy using only the E1C (pilot) component of the Galileo signal.

The other interesting point to note from Figure 4 is that the commercial receiver exhibits better resilience against the jammer. This is most likely due to a narrower front-end bandwidth in the commercial receiver, although this cannot be confirmed because the receiver manufacturer does not provide this information.

From the time-frequency evolution of the jamming signal used for the experiment and shown in Figure 1, it emerges that the bandwidth of the jamming component is approximately 10 megahertz. If the commercial receiver had a smaller bandwidth, then it would effectively filter out some of the jammer power, thereby improving its performance with respect to the software receiver results.

Figure 4 provides an indication of the performance degradation caused by a jamming signal when no mitigation technique is employed. The notch filter is expected to improve the receiver performance. The improvement depends on the filter parameters and their ability to track the jammer’s rapid frequency variation.

Two configurations of the adaptive notch filter were tested: kα = 0.8 and kα = 0.9. The first case has a smaller contraction factor and, hence, a wider notch than the latter.
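
Using the single-pole transfer function assumed earlier, the short sketch below compares the magnitude response for the two contraction factors at a fixed, arbitrary notch frequency. It only illustrates the width difference and is not the adaptive filter itself; the sampling rate and notch frequency are placeholders.

```python
# Frequency response of the assumed single-pole notch filter for the two tested
# contraction factors, with the zero fixed on the unit circle at an arbitrary
# notch frequency. Smaller k_alpha gives a wider notch.
import numpy as np
from scipy.signal import freqz

fs = 40e6                     # assumed sampling rate
f_notch = 5e6                 # arbitrary notch center frequency
z0 = np.exp(2j * np.pi * f_notch / fs)

for k_alpha in (0.8, 0.9):
    b = [1.0, -z0]                    # numerator:   1 - z0 z^-1
    a = [1.0, -k_alpha * z0]          # denominator: 1 - k_alpha z0 z^-1
    w, h = freqz(b, a, worN=4096, whole=True, fs=fs)
    width = np.sum(20 * np.log10(np.abs(h) + 1e-12) < -3.0) * fs / 4096
    print(f"k_alpha={k_alpha}: approx -3 dB notch width = {width/1e6:.2f} MHz")
```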

The adaptive step size of the stochastic gradient algorithm was tuned for the jammer under consideration. (The adaptation of the filter zero must be fast to track the frequency variations of the jammer’s chirp signal.) In each case the magnitude of the zero of the notch filter was used as a detector for interference. We chose a threshold of 0.75 so that when the amplitude of the zero was greater than this threshold, the notch filter was enabled and the receiver processed this filtered data. Otherwise the receiver processed the raw data collected from the antenna.

Figure 5 and Figure 6 illustrate the results of the filtering for the two cases. In these plots, the upper portion shows the time evolution of the frequency content of the raw data, with the frequency estimate of the notch filter superimposed as a dashed red line. The lower plots show the time evolution of the frequency content of the filtered data. From these lower plots the wider notch appears to do a better job of removing the jammer signal. On the other hand, this will also result in a greater reduction of the useful signal power.

The effect of the notch filter on the reception of GNSS signals in terms of the C/N0 degradation is illustrated in Figure 7 and Figure 8 for Galileo and GPS signals, respectively. Again, the difference between the impact on GPS and Galileo signals is slight, due to the wideband nature of the interferer. On the other hand, the benefit of the notch filter is clear in both figures. The sidebar, “Track the Jamming Signal,” (at the end of this article) provides access to data and tools with which readers can test different configurations of the notch filters themselves.

Interestingly, it appears that two limiting curves exist, one for the case of no filtering and one for the case where a notch filter is applied. The variation in the contraction factor (over the range considered) has little effect on the C/N0 effectively measured by the GPS and Galileo software receivers.

The separation between the two curves is approximately five decibels, i.e., the receiver that applies the notch filter experiences approximately five decibels less C/N0 loss than an unprotected receiver for the same J/N0. Of course, we must remember that this result applies for the data collection system considered in this test, which consists of a 14-bit analog-to-digital converter (ADC) with no automatic gain control (AGC). In commercially available receivers with a limited number of bits for signal quantization the non-linear losses due to the combination of these two front-end components will likely lead to additional losses.

Conclusion
We have proposed an IIR adaptive notch filter as an easy-to-implement mitigation technique for the chirp signals typical of the commercially available jammers that have become increasingly common in recent years. A simple stochastic gradient adaptation algorithm was implemented, together with a simple interference detection scheme. Our analysis showed that, for a receiver with sufficient dynamic range, the proposed technique leads to an improvement of approximately five decibels in terms of effective C/N0.
We tested the proposed scheme on data collected from a low-cost commercial jammer in a large anechoic chamber. We used a software receiver to process both GPS and Galileo signals. The broadband nature of the chirp signal means that its effect on GNSS signal processing is similar to an increase in the thermal noise floor. Hence, the impact is very similar on both GPS and Galileo receivers. On the other hand, the chirp signal is instantaneously narrowband, a feature that is exploited by the use of a notch filter with a highly dynamic response to variations in the frequency of the interferer.

Acknowledgment
This study is mainly based on the paper “GNSS Jammers: Effects and Countermeasures,” presented by the authors at the Satellite Navigation Technologies and European Workshop on GNSS Signals and Signal Processing (NAVITEC), December 2012.

Additional Resources
[1]
Borio, D., Camoriano, L., and Lo Presti, L., “Two-pole and Multi-pole Notch Filters: A Computationally Effective Solution for GNSS Interference Detection and Mitigation,” IEEE Systems Journal, Vol. 2, No. 1, pp. 38–47, March 2008
[2]
Boulton, P., Borsato, R., and Judge, K., “GPS Interference Testing, Lab, Live, and LightSquared,” Inside GNSS, pp. 32-45, July/August 2011
[3]
Haykin, S., Adaptive Filter Theory, 4th ed., Prentice Hall, September 2001
[4]
Jones, M., “The Civilian Battlefield, Protecting GNSS Receivers from Interference and Jamming,” Inside GNSS, pp. 40-49, March/April 2011
[5]
Mitch, R. H., Dougherty, R. C., Psiaki, M. L., Powell, S. P., O’Hanlon, B. W., Bhatti, J. A., and Humphreys, T. E., “Signal Characteristics of Civil GPS Jammers,” Proceedings of the 24th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS 2011), Portland, OR, pp. 1907–1919, September 2011
[6]
Paonni, M., Jang, J., Eissfeller, B., Wallner, S., Avila-Rodriguez, J. A., Samson, J., and Amarillo-Fernandez, F., “Wavelets and Notch Filtering, Innovative Techniques for Mitigating RF Interference,” Inside GNSS, pp. 54–62, January/February 2011
[7]
Young, J. and Lehnert, J., “Analysis of DFT-based Frequency Excision Algorithms for Direct Sequence Spread-Spectrum Communications,” IEEE Transactions on Communications, Vol. 46, No. 8, pp. 1076–1087, August 1998

Could Future Gasoline Engines Emit Less CO2 Than Electric Cars?

As reported by Green Car Reports: Is it possible to make a gasoline engine so efficient that it would emit less carbon dioxide per mile than is created by generating the electricity to run an electric car over that same mile?


Small Japanese carmaker Mazda says yes.
In an interview published last week with the British magazine Autocar, Mazda claimed that its next generation of SkyActiv engines will be so fuel-efficient that they'll be cleaner to run than electric cars.

That's possible. But as always, the devil is in the details.

Specifically, total emissions of carbon dioxide (CO2) in each case depend on both the test cycles used to determine the cars' emissions and the cleanliness of the electric generating plants used to make the electricity.

In the U.S., the "wells-to-wheels" emissions from running a plug-in electric car 1 mile on even the dirtiest grids in the nation (North Dakota and West Virginia, which burn coal to produce more than 90 percent of their power) equate to those from the best non-hybrid gasoline cars: 35 miles per gallon or more.
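
The rough arithmetic behind that equivalence can be sketched with round numbers: roughly 8,900 grams of CO2 per gallon of gasoline burned (the EPA tailpipe figure), about 0.3 kWh per mile for a typical electric car, and on the order of 1,000 grams of CO2 per kWh for a coal-heavy grid. These are illustrative values only; full wells-to-wheels accounting adds upstream emissions on both sides.

```python
# Back-of-envelope comparison with round, illustrative figures: grams of CO2 per
# mile for a 35 mpg gasoline car versus an electric car charged on a coal-heavy
# grid. Real wells-to-wheels numbers include upstream emissions on both sides.
g_co2_per_gallon = 8900.0
mpg = 35.0
gasoline_g_per_mile = g_co2_per_gallon / mpg           # ≈ 254 g CO2/mile

ev_kwh_per_mile = 0.3
coal_grid_g_per_kwh = 1000.0
ev_g_per_mile = ev_kwh_per_mile * coal_grid_g_per_kwh  # ≈ 300 g CO2/mile

print(round(gasoline_g_per_mile), round(ev_g_per_mile))  # comparable magnitudes
```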

The U.S. average for MPG equivalency is far higher, however, and it's roughly three times as high--near 100 mpg--for California, the state expected to buy as many plug-in cars as the next five states combined.
In Europe, however, 35 mpg is a perfectly realistic real-world fuel efficiency for small diesel cars (generally compacts and below). And their official ratings are often higher still.

European test cycles for measuring vehicle emissions (which translate directly to fuel efficiency) are gentler than the adjusted numbers used in the U.S. by the EPA to provide gas-mileage ratings.

On the generation side, some European countries use coal to produce a large proportion of their national electricity. (Some also buy their natural gas from Russia, a supplier that may appear more problematic today than in years past.)

So if Mazda can increase the fuel economy of its next-generation SkyActiv engines by 30 percent in real-world use, as it claims, it's possible that its engines might reach levels approaching 50 mpg or more--without adding pricey hybrid systems.

And those levels would likely be better than the wells-to-wheels carbon profile of an electric car running in a coal-heavy country--Poland, for example.

Mazda will raise its current compression ratio of 14:1 to as much as 18:1 and add elements of homogeneous charge-compression ignition (HCCI) to its new engines.

The HCCI concept uses compression itself to ignite the gas-air mixture--as in a diesel--rather than a spark plug, improving thermal efficiency by as much as 30 percent, though so far only under light loads.
Mazda's next round of SkyActiv engines won't emerge until near the end of the decade, "before 2020." Even its current-generation diesel models still haven't been launched in the U.S.

With rising proportions of renewable sources like wind and solar, and perhaps more natural gas, some European grids will then be cleaner than they are today--making the comparison tougher for Mazda.

But the company's assertion is at least plausible. We'll wait for actual vehicles fitted with the new and even more efficient engines to emerge, and see how they compare to the latest grid numbers then.

Electric-car advocates may be tempted to pooh-pooh any vehicle with any tailpipe emissions. Or they may point out that electric-car owners in the U.S. appear to have solar panels on their homes at a much higher rate than the country at large--meaning much of their recharging is done with virtually zero carbon emitted.

But every effort to reduce the carbon emissions per mile of the trillions of miles we drive globally every year is a step in the right direction.

Will Mazda lead the march along that path? We look forward to learning more about its next SkyActiv engines.

Boom In Sale Of Telematics Solutions Predicted

Vehicle tracking using OBDII port devices is becoming more popular due to easy installation and the ability to track personal vehicles and/or rental/leased vehicles.

As reported by Out-Law.com: Sales of telematics devices are expected to increase by more than 1000% over the next five years, according to a new study.

Technology market analysts ABI Research said that 117.8 million 'onboard diagnostics (OBD) aftermarket telematics solutions' are expected to be subscribed to in 2019, up from 9.5 million this year.

It said the demand for solutions that plug into the OBD port built into vehicles is currently strongest in the European and North American markets, but that smartphone applications will provide increasing competition within the telematics technology market.

Smartphone HTML5 applications are increasingly being used to provide detailed information not only about a vehicle's location but also to gather and process sensor information.

"Beyond the 2019 forecast horizon, the window of opportunity for OBD-dongles will gradually close as open factory-installed OEM (original equipment manufacturer) telematics becomes more widespread," ABI Research said in a statement.
"OBD solutions will also face competition from aftermarket telematics solutions based on smartphones connecting directly to the vehicle OBD port via Bluetooth or Wi-Fi adapters. Even standalone smartphone applications are starting to be explored for applications such as UBI (usage-based insurance) and driver behavior monitoring of truck drivers leveraging the built-in GPS, accelerometer, and connectivity," it said.
'Telematics' is a term most commonly associated with the motor insurance industry, though in general it refers to any system that collects remote or mobile sensor data. Insurance companies are increasingly recording information via in-car devices, which allows them to set insurance premiums that reflect the driving style of motorists. Using recorded data, companies are able to pinpoint specific driver or vehicle risks rather than using more generalized area statistics.
Earlier this year competition law expert Natasha Pearman of Pinsent Masons, the law firm behind Out-Law.com, warned insurance companies to be careful to ensure that arrangements governing how telematics data is gathered, managed and accessed do not fall foul of competition and privacy laws.

Tuesday, March 25, 2014

2014: GPS Modernization Stalls

As reported by Inside GNSS: With the optimism of college-bound seniors touring the Ivy League, GPS managers have been weighing options to dramatically change the GPS constellation. Now, after studying the costs, considering the benefits, and assessing the funding climate, officials have made the starkly fiscal decision to stick close to home and take a few extra years to finish.

Although the final decisions will not be made until sometime this spring, proposals for a distinctly different type of GPS constellation appear to be off the table, sources tell Inside GNSS. The plan now appears to forego any major shift in the design of the satellites such as those proposed in Lower Cost Solutions for Providing Global Positioning System (GPS) Capability, an Air Force report delivered to Congress last April. 

The established course of modernization will proceed largely unchanged, say sources, but it will take longer to build and launch the GPS III satellites and add the new signals. Full implementation of the new military M-code, for example, will be pushed back roughly four years at least, noted one source. Those waiting for the new civil signals will also have to be patient. 

The current course of action will likely raise the total cost of the modernized system although the higher costs will be spread out over time in a way that fits more appropriately into budgets constrained by sequestration and the overall post-war downsizing taking hold at the Pentagon. 

“We’ll pay a unit price that’s a little higher, but just like when you buy your car on payments, you pay more for the car but you have cash flow management,” Maj. Gen. Robert McMurry, director for Air Force space acquisition told reporters during a briefing on the military space budget. 

The fiscal year 2015 (FY15) budget McMurry was describing reflects the decisions so far. The FY15 request for money to procure GPS III satellites is just $292.397 million — down dramatically from the White House’s request for $477.598 million in FY14 and roughly half of the $531 million the White House projected it would need for the program just last year. It is less even than the $450.598 million allocated by Congress. 

The administration is also asking $212.571 million for GPS III development, somewhat less than the $221.276 million requested last year and only slightly smaller than the $215 million that was projected to be needed last year. The request is a bit more than the $201.276 million Congress appropriated for FY14. 

The request to procure Block IIF satellites is about the same as last year — $52.09 million — but the new ground control segment faces a reduction. Whether it is a big or small cut depends on how you look at it. The White House is asking for $299.76 million in FY15 for the Next Generation Operational Control System or OCX. That is down sharply from its $383.5 million request last year and the congressionally approved FY14 amount of $373.5 million. 

In the projections that accompany each budget, however, the White House last year only anticipated asking for $303.5 million for FY15. In fact, both the FY14 and FY15 budgets project falling GPS allocations for the next several years. This is despite the fact that the program has been experiencing delays and challenges — the sort of things that normally add to the cost. How these two trends will mesh in the end is unclear. 

Implications 
Where it had planned to buy two GPS IIIs this year, the Air Force now plans to buy only the ninth in the series and put down money for long-lead items on Space Vehicle 10 (SV10). It will then buy just one new satellite next year and three each year for the next three years, according to McMurry. 

The launches will be stretched out a bit as well. Five GPS missions will see their booster procurement moved past 2017, said McMurry. That has implications for Department of Defense (DoD) plans to introduce more competition into the launch procurements. 

“Those five will still be available for competition,” he said. “It’s to be determined how we’ll do that competition. It will be part of the phase two, and they’re working on that strategy.” 

The long life of the existing satellites is making it possible to find savings without undermining the quality of the GPS system, officials said. 

“(The) satellites are living longer than we predicted so we didn’t need to replenish those as fast as we originally planned. And it made no sense to spend that money if we didn’t need the satellites,” said Troy Meink, deputy under secretary of the Air Force for space. 

The Constellation 
Less clear is how many satellites the Air Force plans to have in the constellation over the long run. 
The FY15 budget “reprofiles Global Positioning System, GPS III, to meet constellation sustainment demands,” said Under Secretary of the Air Force Eric Fanning. 

“Sustainment,” however, does not necessarily mean supporting the constellation in its current configuration, which stands at 31 satellites plus spares. While having more satellites is advantageous to those on the ground, it is not strictly necessary according to Air Force mandates. 

“The requirement is 27 satellites to maintain 24,” McMurry told the audience at a March 7 breakfast on Capitol Hill sponsored by the Mitchell Institute. “I think our approach will be to absolutely assure that requirement, which meets that mandated performance, but in doing so I will expect that we’ll, in reality, maintain a slight surplus to that as we move on.” 

“It is clear that they are going to stay with the ‘enhanced 27,’ which is 27+3 for as long as they can,” said a source familiar with the issue. “And even if for some reason those (satellites) that are turned on now drop out, they’ve got as many as five that they can turn back on.” 

Those backup satellites just have “to last until the next launch,” said the source, suggesting it is realistic that the Air Force will be able to keep the number of satellites up even though the spacecraft have been operating far past their design lives. 

The question is whether the next launch, or launches, can happen fast enough. The existing satellites were launched in clusters and are therefore at risk of failing in groups as they age out. Will the Air Force have the satellites it needs and the lift capacity required to deal with a sudden, rapid loss of the older spacecraft? 
It appears that the answer may be “Yes.” 

Dual Launch After All 
Inside GNSS has learned that, even though the Pentagon has slashed the funding for dual launch, plans are in the works to enable the lighter payloads and dual-launch capability needed to rapidly refresh the constellation. 

The United Launch Alliance, a 50/50 joint venture between Lockheed Martin and Boeing, is stepping in to finish the work. “ULA is funding the launch vehicle development work that will enable dual launches of GPS III and other potential satellites, with a planned first launch capability in 2017,” the company said in a written response to a question from Inside GNSS.

And the Air Force will continue to fund the technology needed on the satellites to make dual launch possible. 

“We have maintained the development of dual-launch capability within the satellite line,” McMurry said. “The satellite itself will have the dual transponders and radio frequencies in place so that, if you launch two of them together, you could communicate with both of them independently.”

Slimmer Sats in a Pinch
The satellites might still be too heavy to launch two at a time unless some other changes are made. 
To address that, said a source, military managers are planning to build flexibility into the GPS III satellites that will enable the service to drop the Nuclear Detonation Detection System or NDS payload from the GPS III satellite if necessary. 

“They are going to ensure that they have the option to fly GPS III without the NDS,” said the source, adding that, without NDS, “should there be the requirement . . . to rapidly replace the constellation, they will actually be able to launch two.” 

The built-in flexibility also means the GPS program will not have to plan around any delays in the new NDS payload — which is on schedule but still needs a good deal of testing, the source said. 

Dual-launch capability and potentially lighter satellites are a variation on a far more ambitious proposal to develop a stripped-down version of the GPS III spacecraft that would not carry the NDS and provide fewer signals. These smaller satellites would likely have been used in combination with some of the larger, regular GPS spacecraft. 

The austere NavSats or “NibbleSats” would have been cheaper to build and could have been launched at least two at a time — perhaps even in clutches of three or four — dramatically reducing the cost of maintaining the constellation. But the proposal was set aside after hitting a number of stumbling blocks involving funding and contract management, sources told Inside GNSS.

“I think the pressure to reduce the cost of the satellite is very much there,” said an expert. “Obviously anything the Air Force can do to drive the cost of the satellite down — I think they’re still alert to those opportunities. But I think the Congress has made it very difficult for them to create a ‘new start.’”

Not only is it hard to get money for new starts (new programs); the rules governing the seemingly endless rounds of Congressional budget extensions mean the program could have become tangled in delays, said Stan Collender, national director of financial communications for Qorvis Communications and an expert on the federal budget. 

“Under a continuing resolution,” said Collender, “new starts are prohibited.” 

Moving to small satellites would also have been such a big change that it likely would have required the Air Force to recompete the GPS III contract. New procurements face a long process fraught with complexities and the uncertainties created whenever new contractors enter a program where legacy satellites have been in place so long that they are frequently referred to jokingly as being “old enough to vote” — that is, have reached 18 years of age. 

The prospect of reopening the contract is why “NavSats, small sats, little sats — that stuff is off the table,” said a source, who has been following the issue. 

Thrifty Innovation
That does not mean the GPS III program will proceed without a few enhancements. 

Col. Bill Cooley, director of the GPS Directorate, said the Air Force is “looking at a design turn” and is examining the use of better solar panels and traveling wave tube amplifiers, or TWTAs (pronounced “TWEEtas”), which could reduce power requirements. 

Changing the batteries is also under consideration, according to sources. One expert pointed out that some of the older battery technology is no longer even available. Another source said that lithium-ion batteries were being considered. 

“The decisions are in the process of being made,” Cooley told Inside GNSS. That decision process may be playing a role in the delay of the first GPS III satellite, which is now not expected to be ready until FY16. 
The design of the first satellite is supposed to be a template of sorts for the rest, and Cooley made it clear to the Mitchell Institute audience that he wanted the design to be “repeatable.” 

Right now, however, interference problems within the satellite’s navigation payload have to be resolved. The problems in the payload, which is being built by Virginia-based Exelis, have already been cited as a reason for delays in the program. 

The GPS IIIs must generate and transmit eight signals — legacy military P-code on L1 and L2 frequencies and civil C/A-code on L1, as well as the new dual-frequency M-code, and civil L1C, L2C, and L5 signals. In order to keep the timing and everything synchronized, there is “one critical box” involved in their generation, said Cooley. 

“There’s a whole bunch of techniques you can use,” Cooley told Mitchell Institute attendees. You can use absorptive material, he said, redesign some of the boards or separate the signals by putting them in separate boxes. “All options are on the table.” 

Sources confirm that the Air Force seems to have workable solutions in hand. Although those solutions may need extensive testing, the potentially two-year delay could also give the program time to work in some of the innovations mentioned previously. Program managers may also just be working hard to get the most out of the new satellites. 

“The GPS IIIs have a design life of 15 years,” said Cooley. “That’s a real challenge — to get 15 years in that harsh environment. We hope that they will last much longer. We hope that we can get a satellite that can vote and drink and all those kinds of things.” 

What the changes, delays, and development problems mean for GPS III prime contractor Lockheed Martin is unclear. 

“Over the next few weeks we will review the budget in detail to understand the specific impacts to our business,” the company said in a written response to Inside GNSS. “We look forward to working with the administration and Congress over the coming months as budget discussions continue.” 

OCX and Civil Money
What the delay in the first GPS III satellite means for Raytheon is that the OCX prime contractor will have more time to work on ground segment modernization, which has already slipped behind schedule, said McMurry, who cited slippage in the initial GPS III delivery as the biggest reason for a new delay in the OCX program. 

OCX program managers may get more time still if all the money from the civil side of the GPS program does not come through. 

As noted earlier, the White House scaled back its FY15 defense budget request for OCX to $300 million. This is just under the $303.5 million that was projected to be needed this year, though the program has been experiencing difficulties. This is the request for the Defense Department. OCX also gets part of its money from the U.S. Department of Transportation (DoT), and a long string of underpayments from DoT has put GPS program managers in a position where they will be forced to reprogram OCX, adding some six months to the schedule and tens of millions of dollars to DoT’s bill, according to an expert with knowledge of the issue. 

The White House gave DoT the responsibility for funding those parts of the GPS program needed by civil users, and DoT handed the Federal Aviation Administration (FAA) the actual funding task. 

The FAA has largely failed, however, to persuade Congress to allocate the money for the civil funding. This has forced the agency, which has cost overruns on other programs, to short its payments to DoD for the last several years. 

The FAA is now trying to make up for those too-small payments but “it’s not pretty,” said the source, who spoke on condition of anonymity. 

The Administration’s FY15 budget request for civil funding actually jumped from the $20 million requested last year to $27 million — better, but still a far cry from the $40 to $50 million that was supposed to be allocated each year for five years. Even so, if FAA convinces Congress to approve the whole request it will be a dramatic improvement over the scant $6 million it got for FY14. 

Failure to win over lawmakers could have significant consequences. 

Up to now the Air Force has been able to manage around the budget shortfalls and keep things more or less on track. With sequestration and other cuts coming out of DoD space programs, GPS program managers are no longer in a position to finagle funding for civil capabilities. 

The source told Inside GNSS that, should the FAA fail to secure adequate monies, the OCX program will have to be stretched out by some six months at considerable expense — money that FAA will also be expected to make up. The total bill for what FAA owes, plus the added cost of delay, would top $100 million, said the source — nearly four times the current budget request. 

DoD and FAA will have to work together pretty closely to manage during these austere times, said the source. “When you get cut to the bone you have to work pretty closely just to survive.” 

Sequestration Looms 
The budget crunch could get even worse if sequestration is applied without changes in 2016.
 
Military officials made it clear that they are assuming sequestration will not resume in full in 2016, as is now the law. If it should reemerge, said Fanning, “we would be unable to procure one of the three GPS III satellites planned in FY17.” 

Still unclear is how the launch rate or other aspects of the program would be affected, but it is clear, said Collender, that budget politics mean sequestration is likely to remain a factor. He estimated a 3-out-of-4 chance that the cuts will return in full force in the 2016 budget. 

“It will be a presidential year,” he said. “You’ll have Republicans that don’t want to be blamed for increasing spending; so, spending cuts might be in place. There’s probably a 75 percent chance that the 2016 sequester stays in place as is.” 

That could force still more changes to the GPS program and perhaps even reopen consideration of a GPS III redesign, experts hinted. Asked what options the Air Force was looking at for the long term, Cooley left the door open. 

“What options are we not looking at?” he said.