Intel’s WiFi/WiMax Card: $50  

Intel has unveiled prices for its upcoming Echo Peak WiMAX/Wi-Fi and Shirley Peak Wi-Fi 802.11n modules for Montevina-based notebooks, says Digitimes. Volume shipments are expected to begin in the second and third quarters of this year.

According to Intel data, the Echo Peak wireless module will support both WiMAX and 802.11n and will be available at prices ranging from US$43 to US$54, depending on specifications. Meanwhile, the Shirley Peak module will support only 802.11n, at prices between US$19 and US$30. Montevina-based notebooks will be known as Intel Centrino 2.

The launch of Echo Peak and Shirley Peak wireless modules is expected to accelerate migration to 802.11n products and to push down prices.

With Montevina-based products expected to account for over 50% of Intel’s shipments of notebook platforms by the fourth quarter of 2008, the penetration rate of 802.11n devices is expected to surge in that quarter or in the first half of 2009.

Taiwan-based network-equipment makers, including Asustek, Gemtek and Universal Scientific Industrial (USI), are likely to be the contract makers for the Echo Peak and Shirley Peak modules.

For UMPC and MID devices, Intel may be advancing its next-generation ultra-mobile platform – codenamed “Menlow” – from late 2008 to the first half of 2008, says Digitimes.


Wireless IPv6 Tested  

Government Computer News reports that organizers of the Internet Engineering Task Force’s 71st meeting, held last week in Philadelphia, temporarily pulled the plug on all Internet access at the event. The organizers then offered only wireless Internet Protocol version 6 (IPv6) access for a few hours.

The IETF wanted to demonstrate to the attendees, as well as the rest of the world, that accessing the Internet only by IPv6 was possible.

Throughout the week, the IETF blanketed the working group meeting rooms with WiFi, then switched it over to IPv6 for a few hours. The idea behind the temporary switchover was to see what problems would come up, said project coordinator Leslie Daigle, who is the chief Internet technology officer for the Internet Society.

Ars Technica reports that network traffic plummeted from some 30Mbps to around 3Mbps, as the meeting attendees who had IPv6 enabled could now reach only IPv6-accessible destinations on the Internet. This search page by Google, for example, accepts only IPv6 connections.

The Office of Management and Budget has mandated that government agencies must have their network backbones IPv6 ready by the end of June (pdf).

California’s MetroNet6, for example, plans to support both wireless and broadband technology so that either can be used interchangeably. MetroNet6 would allow a command center to be established in an ad hoc manner that could communicate with a National Homeland Security Office (using wireless or broadband communications), as well as with the National Guard or other U.S. agencies.

About half of the IETF audience felt that preparing for IPv6 was relatively painless, even if they did encounter a few glitches. During the test, the organizers cut off the IPv4 access and provided IPv6-only access through a 100 Gbps IPv6 backbone.

While users of Microsoft Windows Vista and Linux were ready for IPv6 access, those with Windows XP experienced problems. XP supports IPv6, but its Domain Name System client cannot send queries over IPv6 transport. Someone came up with the idea of downloading a copy of BIND (Berkeley Internet Name Domain) so that domain names could be looked up locally. This required a last-minute patch to the software, which BIND developer Mark Andrews contributed just an hour before the switchover.

While there were many Apple laptop users in the audience, another problem came up with Mac OS X, which could not do Dynamic Host Configuration Protocol for IPv6 (DHCPv6). DHCP is a protocol for automatically assigning addresses on a network. The organizers set up a site with instructions on how to configure computers for IPv6 communications.

Morgan Sackett, VP of Engineering at VeriLAN Event Services, which provided the wireless network, noted that IPv6, with its copious amount of address space, should eliminate the need for DHCP altogether.
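Sackett’s point is easy to quantify with Python’s standard ipaddress module: a single /64 subnet, the size normally given to one LAN, holds 2^64 interface addresses, which is why stateless autoconfiguration can stand in for DHCP-style address management. (A small illustrative sketch; the prefix below is the IPv6 documentation range, not a real network.)

```python
import ipaddress

# One standard IPv6 LAN prefix (2001:db8::/32 is the documentation range).
lan = ipaddress.ip_network("2001:db8:0:1::/64")
hosts_per_lan = lan.num_addresses                      # 2**64 addresses

# The entire IPv4 Internet, for comparison.
ipv4_internet = ipaddress.ip_network("0.0.0.0/0").num_addresses  # 2**32

print(hosts_per_lan)                   # 18446744073709551616
print(hosts_per_lan // ipv4_internet)  # 4294967296 IPv4-Internets per LAN
```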

Comcast provided a 100Gbps connection while VeriLAN provided a 10Gbps backup and a 100Mbps backup for the backup for the IETF meeting last week.


Australian WiMAX: A Disagreement  

Australia’s first WiMAX operator has branded the technology a “miserable failure” and has decided to close its WiMAX network, which it described as a “disaster,” reports a website sponsored by Telstra, Australia’s incumbent telephone and broadband provider.

Telstra’s Head of Public Policy & Communication, Dr Phil Burgess, has called on the government “to finally terminate the failed $1 billion OPEL give-away”, a nation-wide WiMAX program.

“Today’s revelation is another nail in the coffin of Opel, providing even more compelling evidence that the government needs to terminate the $1 billion give-away of Australian taxpayers’ money to the failed Optus/Opel consortium,” Dr Phil Burgess said.

Buzz Broadband CEO Garth Freeman “slammed the technology” at an international WiMAX conference in Bangkok, saying WiMAX was “mired in opportunistic hype.” Buzz Broadband was the first operator to use WiMAX technology in Australia, using 3.5GHz for customer access and 5.8GHz for backhaul.

Dr Burgess went on to say, “We have been telling this story for months. It’s time to wake up. The wireless community understands the truth. The market understands the truth. Consumers are learning the truth. The technology community understands the truth. The truth is that WiMAX does not work; a truth that is widely recognised around the world.”

Buzz Broadband, which competes against Telstra, used Airspan’s AS.MAX solution to deliver WiMAX service across a 30,000 square kilometre region in Queensland, where it holds 3.4GHz licenses. Buzz says indoor penetration of the fixed (802.16d) system was poor, as was reliability; the company is now moving to a 1.9GHz TD-CDMA system and trialling a “wireless cable modem” technology with a mesh architecture to reach the last mile.

Optus was created to provide competition to the then government-owned telecommunications company Telecom Australia, now known as Telstra. In June 2007, a joint-venture subsidiary of Optus called OPEL Networks was the sole successful bidder under the government’s plan to bring broadband to large parts of Australia. OPEL Networks received $600 million under the program. The Howard Government later decided to allocate a further $358 million to extend broadband to 99 per cent of Australians (pdf).

Telstra is the largest provider of both local and long distance telephone services in Australia, and provides mobile phone services, dialup, wireless, DSL and cable internet access.

Optus, Australia’s second-largest communications company, has competed with Telstra since the late 1980s.

Telstra claims that Optus WiMAX wrong-headedly duplicates Telstra’s 3G network with an incompatible WiMAX network and is unfairly subsidized by taxpayers. Optus disagrees. It says it provides competition and services that monopolist Telstra doesn’t.

On 18 June 2007 Australian Prime Minister, John Howard announced OPEL, a joint venture between Optus and Elders, and secured $958 million in funding from the Australian Government under the Australia Connected programme.

OPEL is intended to deliver affordable broadband services to rural and regional Australians at metro-comparable prices. The OPEL joint venture paid US$65 million to acquire Austar’s 2.3GHz and 3.5GHz spectrum to set up its national WiMAX voice and internet service.

A website sponsored by supporters of the WiMAX-enabled broadband vision says: “Telstra has claimed that any operational or functional split in the company would destroy its share price, hold back infrastructure investments, cost consumers more in the long run, and simply could not be done.”

“Telstra is clearly fearful of losing its monopoly through splitting up the company, and is once again pushing outdated propaganda to persuade the public it can’t be done.”

Start-ups like Personal Broadband (using Arraycomm) and Unwired (using Navini), made early moves in Australian cities like Sydney, pioneering mobile broadband data services before many operators had found a viable business model in other countries, explains Caroline Gabriel. New Zealand’s Woosh Wireless uses the TD-CDMA standard promoted by IP Wireless (now NextWave Wireless).

Now consolidation is taking hold.

Unwired became a takeover target for TV and telephony service provider Seven Network. It was widely expected that the company would also acquire the WiMAX licenses and activities of pay TV operator Austar, which has a spectrum partnership with Unwired. However, Austar instead sold its interests to Opel Ventures, a joint venture of Optus and rural service provider Elders, for AUD 65 million ($568m).


Testing using TTCN-3  

TTCN previously stood for Tree and Tabular Combined Notation. The name was understandable because test cases were written in tabular formats, with many levels of indentation that could be regarded as a tree-like structure. With TTCN-3, the abbreviation stands for Testing and Test Control Notation: the focus is on testing, not on how the test cases are written. We can still write test cases in the old TTCN-2 way, but that is no longer the only way.

Figure 1 gives an overview of TTCN-3 [1]. As we can see, test cases can be written directly in TTCN-3 core language (such a concept did not exist in TTCN-2), in tabular format or in graphical format. The standard also allows for newer presentations that could interface with the core language. For example, it’s perfectly valid for someone to write test cases in XML and have a conversion mechanism to the core language. Needless to say, an XML presentation format will remain proprietary with no tool support unless it gets standardized.

Figure 1: TTCN-3 Overview

The second fact that becomes obvious from Figure 1 is that the core language interfaces with several other languages. These interfaces facilitate reuse of existing data types and definitions from those languages. For example, UMTS RRC signalling definitions are in ASN.1; the test engineer has no need to convert such definitions into TTCN-3. Any respectable tool on the market must be able to interface directly with these definitions and handle them seamlessly as part of the TTCN-3 core language implementation.


At this point it is appropriate to look at the format of the TTCN-3 core language. It is nothing more than plain text with well-defined syntax and semantics, the syntax being defined in Backus-Naur Form. This means any text editor can be used to write TTCN-3 test cases. The dynamic behaviour of such test cases is quite different from C or Pascal, yet programmers well-versed in procedural languages can get used to TTCN-3 easily. There are many similarities: keywords, data types, variables, control statements, functions, operators and operator precedence, to name a few.

Looking at the differences between TTCN-2 and TTCN-3, Table 1 illustrates an important point with regard to indentation. In TTCN-2, many levels of indentation lead to poor code readability and excessive scrolling in editors. With each alternative there is code duplication (S4), which can be avoided only if S4 is implemented in a reusable test step. Alternatives in TTCN-3 are specified more elegantly, and control flow continues at the same level of indentation. The example in Table 1 can be simplified further by defining default alternative behaviour earlier.

Table 1: TTCN-2 vs TTCN-3 Statements

Having the core language in text also makes it easier to look at differences in a version control system. At run time, it makes debugging at the level of TTCN source a lot easier. This is important for test case developers. I have never known anyone who did any similar debugging at TTCN-2 source. The best I have seen was engineers setting intermediate verdicts at lots of places to ascertain what went wrong and where.

The language is structured in a way that allows a high level of flexibility. Test system definition is modular: an important unit of a test suite is the module, which contains one or more test cases or the control part of a test suite. Concurrent operation is possible because components can execute in parallel. Of course, execution is serialized at the hardware level unless multiple processors are involved. Parameterization is possible, just as it was in TTCN-2, and the concepts of PICS and PIXIT still apply because they are fundamental to any conformance testing.

Test System

Figure 2 represents the test system based on TTCN-3 [2]. The modularity of the design is apparent. Adapters are distinct from the executable. Test management and codecs are distinct entities that interface to the executable. More importantly, interfaces TCI and TRI are standardized so that users have a choice of easily migrating from one tool vendor to another without needing to rewrite the test cases. TTCN-3 Control Interface (TCI) allows for interfacing to codec (TCI-CD) and to test management (TCI-TM). Likewise, TTCN-3 Runtime Interface (TRI) interfaces to the adapters. This interface does the translation between the abstraction in TTCN-3 and the behaviour in runtime.

Figure 2: TTCN-3 Test System

The adapters are implemented in ANSI C or Java; language mappings for both are included in the standard. TTCN-3 allows dynamic mapping of communication channels between the TTCN-3 executable and the adapters. This is one more area where TTCN-3 improves on TTCN-2, where such mapping was static.

Typical Test Cycle

The following would be present in a typical test cycle:

  • Implement the adapters in a chosen language (done only once per adapter per language of choice)
  • Implement the encoders/decoders in a chosen language (done only once per language of choice)
  • Implement the test cases in TTCN-3 (done only once)
  • Compile the test case and test suite (done only once unless test cases change) - at this stage an executable is formed from the abstract definitions
  • Link with adapters, codecs and test management (varies with tool implementation: may be a static link, runtime loading of library or inter-process communication)
  • Execute the test suite (debug if necessary)
  • Collate test results and correct the IUT (Implementation Under Test) if errors are seen
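As a rough illustration, the cycle above can be modelled in a few lines of Python. This is a toy sketch only: all names are hypothetical, and a real TTCN-3 tool drives these stages through the standardized TCI and TRI interfaces rather than plain function calls.

```python
# Toy model of the test cycle above. All names are hypothetical.

def compile_suite(modules):
    """Compile abstract test cases into an executable form (done once)."""
    return [("exec", name) for name in modules]

def execute_suite(executable, adapter, codec):
    """Run each compiled case through the codec and adapter, collect verdicts."""
    verdicts = {}
    for _, name in executable:
        response = adapter(codec(name))        # encode, then exchange with the IUT
        verdicts[name] = "pass" if response else "fail"
    return verdicts

# Hypothetical codec and adapter: the codec translates abstract values into
# concrete messages; the adapter carries them to the IUT and back.
codec = lambda name: name.encode("ascii")
adapter = lambda message: len(message) > 0     # stands in for a real IUT exchange

suite = compile_suite(["tc_attach", "tc_detach"])
results = execute_suite(suite, adapter, codec)

# Collate results; the IUT would be corrected if any case failed.
failed = [name for name, verdict in results.items() if verdict == "fail"]
print(results, "failures:", len(failed))
```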


I have previously used tools from Telelogic but never really liked their GUI; their tools have generally been the least user-friendly in my opinion. I hear from others who have evaluated their TTCN-3 support that they are now better. Telelogic does not focus solely on TTCN-3; they do a whole lot of things, and I think their strength in TTCN-3 is not all that obvious.

Recently I evaluated TTWorkbench from Testing Technologies. It is an excellent tool: easy to install, easy to use, and with good debugging support. It allows test cases to be written in graphical format (GFT) and logs to be viewed in the same format; naturally, it also allows test cases to be written in core language format. The downside is that I found it slow in loading and building test suites. It is based on the Eclipse IDE.

Next I evaluated OpenTTCN. “Open” refers to the openness of its interfaces, which conform to open standards: the tool can be integrated easily with other platforms using the standardized TCI and TRI. With this focus, the tool claims to conform rigidly to all requirements of the TTCN-3 standards, and execution is generally faster than other tools on the market. The company behind it makes only this product; nearly 14 years of experience has gone into it, and its execution environment is claimed to be the best. The downside is that the main user interface is a primitive command-line interface. There is no support for GFT, although this is expected to arrive by the end of the year; likewise, debugging capabilities are in development and are expected to be rolled out sometime this year. OpenTTCN also relies on certain free tools such as TRex, a front-end editor with support for TTCN-3 syntax checking, which is also based on Eclipse.

This is just a sample; there are plenty of other tools out there. Some are free with limited capability, others downright expensive, and some are proprietary. One example of the latter is the General Test Runner (GTR), a tool used within Huawei Technologies.


TTCN-3 is set to become a major language for formal test methodology. WiMAX is using it, SIP tests have been specified in it, and LTE is starting to use it. Other telecommunications standards are using it as well, and its use has spilled over into other sectors: the automotive industry is embracing it, with AUTOSAR test cases expected to be available later this year. The official TTCN-3 website is full of success stories.

It is not just for conformance testing like its predecessor: it is beginning to be used for module testing, development testing, regression testing, reliability testing, performance testing and integration testing. TTCN-3 will coexist with TTCN-2 for some time to come, but for new test environments it will most likely replace TTCN-2 as the language of choice.


An Overview of OFDM  

OFDM has been the accepted standard for digital TV broadcasting for more than a decade. The European DAB and DVB-T standards use OFDM, the HIPERLAN/2 standard uses OFDM techniques, and so does the 5 GHz extension of the IEEE 802.11 standard. ADSL and VDSL use OFDM. More recently, IEEE 802.16 has standardized OFDM for both Fixed and Mobile WiMAX, and the cellular world is not left behind either, with the evolving LTE embracing OFDM. What is it about OFDM that makes such a compelling case for widespread adoption in new standards?

Inter-symbol Interference (ISI)

One fundamental problem for communication systems is ISI. Every transmission channel is time-variant, and two adjacent symbols are likely to experience different channel characteristics, including different time delays. This is particularly true for wireless channels and mobile terminals communicating in multipath conditions. At low bit rates (a narrowband signal), the symbol duration is long enough that delayed versions of the signal all arrive within the same symbol; they do not spill over into subsequent symbols, so there is no ISI. As data rates go up and/or the channel delay increases (a wideband signal), ISI starts to occur. Traditionally, this has been overcome by equalization techniques, linear predictive filters and rake receivers, all of which involve estimating the channel conditions. This works well if the number of symbols to be considered is low. Assuming BPSK, a data rate of 10 Mbps on a channel with a maximum delay of 10 µs would need equalization over 100 symbols, which would be too complex for any receiver.
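The equalizer-span figure quoted above can be checked in a couple of lines (the 10 Mbps and 10 µs numbers are the ones from the paragraph):

```python
# BPSK carries one bit per symbol, so the symbol rate equals the bit rate.
bit_rate = 10e6                       # 10 Mbps
symbol_duration = 1 / bit_rate        # 100 ns per BPSK symbol

max_delay = 10e-6                     # 10 microsecond maximum channel delay
symbols_spanned = round(max_delay / symbol_duration)
print(symbols_spanned)                # 100 -- the equalizer must span ~100 symbols
```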

In HSDPA, the data rate is as high as 14.4 Mbps, but this uses QAM16 and therefore the baud rate is not as high. Using higher-order modulation requires better channel conditions and higher transmit power for correct decoding. HSDPA also uses multicode transmission, which means that not all of the data is carried on a single code; the load is distributed across the physical resources, reducing ISI further. Today the need is for even higher bit rates. A higher-order modulation scheme such as QAM64 may be employed, but this would require higher transmission power. What, then, is a possible solution to the ISI problem at higher bit rates?

Orthogonal Frequency Division Multiplexing (OFDM)

Initial proposals for OFDM were made in the 1960s and 1970s. It has taken more than a quarter of a century for the technology to move from the research domain to industry. The concept of OFDM is quite simple, but implementing it in practice has many complexities. A single stream of data is split into parallel streams, each of which is coded and modulated onto a subcarrier, a term commonly used in OFDM systems. The high bit rate previously carried on a single carrier is thereby reduced to lower bit rates on the subcarriers, and it is easy to see that ISI is reduced dramatically.

This sounds too simple. Why didn’t we think of it much earlier? Actually, FDM systems have been common for many decades. However, in FDM the carriers are all independent of each other: there is a guard band between them and no overlap whatsoever. This works well because in an FDM system each carrier carries data meant for a different user or application; FM radio is an FDM system. FDM is not ideal for wideband systems, though, because it would waste too much bandwidth. This is where OFDM makes sense.

In OFDM, subcarriers overlap. They are orthogonal because the peak of one subcarrier occurs where the other subcarriers are at zero. This is achieved by realizing all the subcarriers together using an Inverse Fast Fourier Transform (IFFT); the demodulator at the receiver recovers the parallel channels from an FFT block. Note that each subcarrier can still be modulated independently. This orthogonality is represented in Figure 1 [1].
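The orthogonality argument can be demonstrated with a toy discrete Fourier transform in pure Python. This is a sketch with eight subcarriers and arbitrary QPSK symbols; a real system would use an optimized FFT library.

```python
import cmath

N = 8  # number of subcarriers (toy size)

def idft(X):
    """Synthesize one OFDM symbol: the sum of N orthogonal subcarriers."""
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def dft(x):
    """Receiver FFT stage: project the signal back onto each subcarrier."""
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# Arbitrary QPSK symbols, one per subcarrier -- each modulated independently.
tx_symbols = [1+1j, 1-1j, -1+1j, -1-1j, 1+1j, -1-1j, 1-1j, -1+1j]

time_signal = idft(tx_symbols)      # transmitted OFDM symbol
rx_symbols = dft(time_signal)       # demodulated at the receiver

# Orthogonality: every subcarrier symbol is recovered without interference.
for tx, rx in zip(tx_symbols, rx_symbols):
    assert abs(tx - rx) < 1e-9
print("all subcarriers recovered")
```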

Figure 1: OFDM Subcarriers in Frequency Domain

Ultimately ISI is conquered. Provided that orthogonality is maintained, OFDM systems perform better than single carrier systems particularly in frequency selective channels. Each subcarrier is multiplied by a complex transfer function of the channel and equalising this is quite simple.

Basic Considerations

An OFDM system can experience fades just as any other system. Thus coding is required for all subcarriers. We do get frequency diversity gain because not all subcarriers experience fading at the same time. Thus a combination of coding and interleaving gives us better performance in a fading channel.

Higher performance is achieved by adding more subcarriers, but this is not always possible. Adding more subcarriers could lead to random FM noise, resulting in a form of time-selective fading. Practical limitations of transceiver equipment and spectrum availability mean that alternatives have to be considered. One alternative is to add a guard time in the time domain to allow for multipath delay spread: symbols arriving late then do not interfere with subsequent symbols. This guard time is pure system overhead and must be designed to be larger than the expected delay spread. Reducing ISI from multipath delay spread thus comes down to deciding on the number of subcarriers and the length of the guard period. Frequency-selective fading of the channel is converted to frequency-flat fading on the subcarriers.

Since orthogonality is essential in OFDM systems, synchronization in frequency and time must be extremely good. Once orthogonality is lost, we experience inter-carrier interference (ICI): interference from one subcarrier to another. There is a second cause of ICI: filling the guard time with no transmission causes problems for the IFFT and FFT, and a delayed version of one subcarrier can interfere with another subcarrier in the next symbol period. This is avoided by extending the symbol into the guard period that precedes it, known as a cyclic prefix. It ensures that delayed symbols have an integer number of cycles within the FFT integration interval, which removes ICI as long as the delay spread is less than the guard period. Note that the FFT integration period excludes the guard period.
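The cyclic prefix can be sketched the same way: pass a toy OFDM symbol through a hypothetical two-tap multipath channel, discard the prefix at the receiver, and undo the channel with one complex division per subcarrier. That one-tap equalization works because, thanks to the prefix, the channel acts as a circular convolution over the FFT window. (Pure-Python sketch; the channel taps and symbols are arbitrary.)

```python
import cmath

N = 8                      # subcarriers
CP = 2                     # cyclic-prefix length, chosen > channel delay spread

def idft(X):
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def dft(x):
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

tx = [1+1j, 1-1j, -1+1j, -1-1j, 1+1j, -1-1j, 1-1j, -1+1j]
body = idft(tx)
signal = body[-CP:] + body         # prepend cyclic prefix (last CP samples)

# Two-tap multipath channel: direct path plus an echo one sample late.
h = [1.0, 0.4]
rx = [sum(h[m] * signal[n - m] for m in range(len(h)) if n - m >= 0)
      for n in range(len(signal))]

rx_body = rx[CP:]                  # receiver discards the prefix (ISI absorbed)
Rx = dft(rx_body)
# Channel frequency response on each subcarrier (DFT of zero-padded taps):
H = dft(h + [0.0] * (N - len(h)))
equalized = [Rx[k] / H[k] for k in range(N)]   # one-tap equalization

for a, b in zip(tx, equalized):
    assert abs(a - b) < 1e-9
print("channel removed with one complex divide per subcarrier")
```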

Advanced Techniques

Although the subcarriers are orthogonal, rectangular pulse shaping gives rise to a sinc shape in the frequency domain. The side lobes decay slowly, producing out-of-band interference, and if the frequency synchronization error is significant, these side lobes further degrade performance. Soft pulse shaping, for example with Gaussian functions, has been studied. Although such a signal decays rapidly away from the carrier frequency, the problem is that orthogonality is lost: ISI and ICI can occur over a few symbols, so equalization must be performed. There are two advantages: equalization gives a diversity gain, and soft pulse shaping gives more robustness to synchronization errors. However, diversity gain can also be obtained with proper coding, and out-of-band interference can be limited by filtering. Thus the technique of channel estimation and equalization seems unnecessary for OFDM systems [2].

Frame and time synchronization can be achieved using zero blocks (no transmission), training blocks, or periodic symbols of known patterns; these provide a rough estimate of frame timing, and the guard period can then be used for more exact synchronization. Frequency synchronization is important to minimize ICI. Pilot symbols are used to estimate and correct frequency offsets, and are preferred where fast synchronization is needed on short frames. For systems with continuous transmission, synchronization without pilot symbols may be acceptable if there is no hurry to get synchronized.

One of the problems of OFDM is a high peak-to-average power ratio, which causes difficulties for power amplifiers: they generally have to be operated at a large backoff to avoid out-of-band interference. If this interference is to be kept more than 40 dB below the power density in the OFDM band, an input backoff of more than 7.5 dB is required [2]. The crest factor is defined as the ratio of peak amplitude to RMS amplitude. Crest factor reduction (CFR) techniques exist so that designers can use a cheaper PA for the same performance. Some approaches to CFR are described briefly below:

  • Only a subset of OFDM blocks that are below an amplitude threshold are selected for transmission. Symbols outside this set are converted to the suitable set by adding redundancy. These redundant bits could also be used for error correction. In practice, this method is practical only for a small number of subcarriers.
  • Each data sequence can be represented in more than one way. The transmitter chooses the representation that minimises the peak amplitude, and the choice is signalled to the receiver.
  • Clipping is another technique. Used with oversampling, it causes out-of-band interference, which is generally removed by FIR filters. These filters are needed anyway to remove the side lobes due to rectangular pulse shaping. The filter causes new peaks (passband ripples), but the peak-to-average power ratio is still reduced.
  • Correcting functions are applied to the OFDM signal where peaks are seen, while keeping out-of-band interference to a minimum. If many peaks have to be corrected, the entire signal has to be attenuated, so performance cannot be improved beyond a certain limit. A similar correction can be done using an additive (rather than multiplicative) function, with different results.

One of the problems of filtering an OFDM signal is the passband ripple. It is well-known in filter design theory that if we want to minimize this ripple, the number of taps on the filter should be increased. The trade-off is between performance and cost-complexity. A higher ripple leads to higher BER. Ripple has a worse effect in OFDM systems because some subcarriers get amplified and others get attenuated. One way to combat this is to equalize the SNR across all subcarriers using what is called digital pre-distortion (DPD). Applying DPD before filtering increases the signal power and hence out-of-band interference. The latter must be limited by using a higher attenuation outside the passband as compared to a system without predistortion. The sequence of operations at the transmitter would be as represented in Figure 2.

Figure 2: Typical OFDM Transmitter Chain


WiMAXed Eee PC 3rd Quarter  

Laptop Magazine scored an hour-long interview with Asus CEO Jerry Shen, and while most of the details they got out of him were already unveiled at CeBIT this week, they did manage to squeeze a few interesting nuggets out of him:

  • The US pricing for the Eee PC 900 will be around $499 at launch, with plans to drop the price within a few months
  • While initial reports suggested that the Windows XP model will sport 8GB of flash memory and the Linux version 12GB, Shen says the Linux model might have as much as 20GB of storage
  • Asus is looking into offering a hard drive option, but any units the company releases between now and June will have SSD only
  • Asus is not abandoning its custom Xandros operating system
  • Units with built-in WiMax and HSDPA could be released in Q3 2008
  • Future models could use Intel's Diamondville processor
  • More color options are coming in a few months


WiMAX: No Satellite Interference says WARC  

WiMax Antennas May Interfere with Satellites, says DSL Reports, a story repeated by Om Malik and Crunch Gear today, after Engadget ran a story saying as much. They point to a Satellite Users Interference Reduction Group study that found sharing Fixed Satellite Services (FSS) with WiMAX services on the C-band spectrum (pdf) posed a significant interference threat to satellite signals transmitted in the C-band frequency.

It’s hardly news. Six months ago, WARC-07 ruled that WiMAX and other services can’t share satellite “C” band frequencies. The World Administrative Radio Conference last year looked into whether part of the C-band satellite spectrum could be shared by services like WiMAX, but ruled against it.

A typical “C band” satellite uses 3.7–4.2 GHz for downlink and 5.925–6.425 GHz for uplink, according to Wikipedia. The downlink band is adjacent to the 3.5 GHz band favored by fixed WiMAX worldwide.

The ITU World Radiocommunication Conference (WRC) 2007, was held 22 October-1 November in Geneva. To keep their frequencies “pristine”, the satellite industry showed that WiMAX could interfere with a satellite digital signal more than 7 miles away — if WiMAX shared the satellite “C band”.

So WRC preserved the C-band for exclusive use by satellite operators last year. Case closed.

“The WRC-07 outcome was everything that the industry could have desired,” says Robert Bell, executive director, World Teleport Association (pdf).

The FCC opened access to the 3650-3700 MHz band (3650 MHz) in the United States. The hybrid regulatory model provides for nationwide, non-exclusive licensing of terrestrial operations.

In [real] space news, SES AMERICOM’s AMC-14, a communications satellite to be used by Echostar for direct local-to-local HDTV, was lost early Saturday after the upper stage of the Russian Proton booster suffered a glitch, dumping the DISH Network satellite in a useless orbit. The satellite “can be controlled but is in an orbit of 28,000 kilometres instead of the planned 36,000 kilometres“.

The Lockheed Martin A2100-based AMC-14 also carries an active phased array that can be reshaped on orbit. Lockheed Martin’s A2100 platform is also used in the DOD’s Advanced Extremely High Frequency system, the Mobile User Objective System and the Transformational Satellite program (TSAT).

Upcoming Satellite launches:


Cellular Power: Backup or Not?  

The FCC is now requiring telecom and wireless companies to provide 8 hours of backup power for cell sites and remote telecom facilities. Several cell phone companies opposed the FCC’s backup power regulations, claiming it would present a huge economic and bureaucratic burden.

There are almost 210,000 cell towers and roof-mounted cell sites across the country, and carriers have said many would require some modification. At least one industry estimate puts the per-site price tag at up to US$15,000.

Sprint Nextel wrote that the rules would lead to “staggering and irreparable harm” for the company. The cost couldn’t be recouped through legal action or passed on to consumers, it said.

Jackie McCarthy, director of governmental affairs for PCIA, The Wireless Infrastructure Association, said the government should allow the industry to decide how best to keep its networks running (pdf), pointing out that all the backup power in the world won’t help a cell tower destroyed by wind or wildfires.

Wireless carriers also are claiming the FCC failed to follow federal guidelines for creating new mandates and went far beyond its authority.

“We find that the benefits of ensuring sufficient emergency backup power, especially in times of crisis involving possible loss of life or injury, outweighs the fact that carriers may have to spend resources, perhaps even significant resources, to comply with the rule,” the agency said in a regulatory filing.

A panel of experts appointed by the FCC following Katrina was critical of how communications networks performed during and after the storm (pdf). Panel members recommended the FCC work with telecommunications companies to make their networks more robust. Regulators then created the eight-hour mandate, exempting carriers with fewer than 500,000 subscribers.

Miles Schreiner, director of national operations planning for T-Mobile, said it can take 1,500 pounds or more of batteries to provide eight hours of backup energy in areas with a lot of cell phone traffic.

“In urban areas, most of the sites are on rooftops and those sites weren’t built to hold that much weight,” Schreiner said.
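Schreiner’s 1,500-pound figure is easy to reproduce with rough numbers. The load and energy-density values below are our own illustrative assumptions, not T-Mobile’s:

```python
def battery_weight_lb(load_w: float, hours: float, wh_per_kg: float = 35.0) -> float:
    """Battery mass (lb) needed to carry a given load for a given runtime.
    35 Wh/kg is a typical figure for VRLA lead-acid backup batteries."""
    kg = load_w * hours / wh_per_kg
    return kg * 2.2046  # kg -> lb

# Assume a busy urban site draws ~3 kW, against the FCC's 8-hour mandate
print(f"{battery_weight_lb(3000, 8):.0f} lb")  # ~1,500 lb, matching Schreiner's figure
```

The arithmetic also shows why the weight scales with traffic: a lightly loaded rural site drawing a third of the power needs only a third of the battery mass.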

The agency agreed in October that it would exempt cell sites from the rules but only if the wireless carrier provided paperwork proving the exemption was necessary.

It would give companies six months from when the rules went into effect to submit those reports and then another six months to either bring the sites into compliance or explain how they would provide backup service to those areas.


LG shows a KS20 clone with WiMAX  

Looks like a KS20, does it not? Ah, but looks can be deceiving! Gearlog says this bad boy has been gutted to use WiMAX in addition to GSM, a combo that won't likely be welcome on Sprint's XOHM network. Here's where it gets interesting, though: an LG rep went on record saying that it would be a pretty trivial matter to swap out the GSM silicon for CDMA, which would make Sprint far warmer to a hookup. The same cat went on to say that they'll be doing seamless handoff between WiMAX and GSM / CDMA networks, which is going to be a pretty critical feature as XOHM builds out.


700MHz: It’s Done!  

It’s done. The 700 MHz spectrum auction wrapped up this afternoon at the Federal Communications Commission, having raised $19.592 billion for the U.S. Treasury, reports RCR Wireless News. Blog Runner and the Wall Street Journal have more.

The FCC is expected to release the names of license winners within 10 days after the close of the auction.

FCC Chairman Kevin Martin said he sent an order to the other FCC commissioners to “delink” the failed D Block from the 700 MHz auction so the auction can be officially closed. Once the commission approves the move, the names of the 700 MHz winners can be released “almost immediately.”

Block  Frequencies (MHz)  Bandwidth  Pairing     Geographic Area Type  No. of Licenses
A      698-704, 728-734   12 MHz     2 x 6 MHz   EA                    176
B      704-710, 734-740   12 MHz     2 x 6 MHz   CMA                   734
E      722-728            6 MHz      unpaired    EA                    176
C      746-757, 776-787   22 MHz     2 x 11 MHz  REAG                  12
D      758-763, 788-793   10 MHz     2 x 5 MHz   Nationwide            1*

*Subject to conditions respecting a public/private partnership.
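The band plan is internally consistent: each block’s advertised bandwidth equals the combined width of its frequency ranges, which a few lines of Python can confirm.

```python
# Block: (frequency ranges in MHz, advertised total bandwidth in MHz),
# transcribed from the FCC 700 MHz band-plan table above.
BAND_PLAN = {
    "A": ([(698, 704), (728, 734)], 12),
    "B": ([(704, 710), (734, 740)], 12),
    "E": ([(722, 728)], 6),
    "C": ([(746, 757), (776, 787)], 22),
    "D": ([(758, 763), (788, 793)], 10),
}

for block, (ranges, advertised) in BAND_PLAN.items():
    total = sum(hi - lo for lo, hi in ranges)
    assert total == advertised, block  # bandwidth matches the paired ranges
    print(f"Block {block}: {total} MHz across {len(ranges)} range(s)")
```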

The anonymous bidding technique was intended to prevent anti-competitive activity during the auction.


  • The C Block carried a $4.6 billion reserve price that was surpassed during round 17. That triggered the spectrum’s open-access provision that required handsets on the band to be able to use third party applications. The open-access provision was championed by Google, but many analysts believe it was Verizon Wireless or AT&T Mobility that actually won the C-Block spectrum.

    Bidders had the option of chasing a nationwide package of eight C-Block licenses. But it ended up split into 12 regional blocks. Thus, there is likely more than one C-Block winner.

    The 12 regional C-Block licenses generated some of the largest individual bids during the 700 MHz auction, with the licenses covering the Mississippi Valley region generating a provisionally winning bid of $1.6 billion.

  • The “D” block, which was supposed to provide public safety access if its $1.3 billion reserve price was met, failed to attract any bids beyond a $472 million opening bid. Frontline Wireless, which planned to bid, did not raise enough investor interest and shuttered its doors just before the auction.

    The FCC now has to decide whether to re-auction the “D” block with a lower reserve price or alter the buildout requirements. House telecom subcommittee Chairman Edward Markey (D-Mass.) said he plans to hold a hearing to discuss results from the auction, including plans for the D Block.

  • The “A” and “B” blocks are smaller (2×6 MHz) chunks in the lower 700 MHz band. They are less ideal since they’re close to the 50,000-watt MediaFLO powerhouse (on channel 55) and the “E” block on channel 56.
  • The “E” block is a non-paired 6 MHz channel, probably destined for mobile television.

Of the 1,099 licenses up for auction, eight remained without a bid: A-Block licenses covering Lubbock, Texas, and Wheeling, W.Va., and B-Block licenses covering Bismarck, Fargo and Grand Forks, N.D.; Lee, Va.; Yancey, N.C.; and Clarendon, S.C.


Motorola rolls out Wave 2-ready WiMAX PC Card and desktop unit  

Motorola's already made some moves in advance of the big Mobile World Congress going down in Barcelona next week, but it looks like the company still has plenty more up its sleeve, with it now announcing a new Wave 2-ready WiMAX PC Card, along with a desktop unit for those less concerned with mobility. Likely of primary interest to most, the PCCw 200 PC card supports both 2.5GHz and 3.5GHz to keep you connected 'round the globe, and is of course fully compliant with the IEEE 802.16e-2005 standard. The desktop-bound CPEi 750 (pictured after the break), on the other hand, is available in your choice of 2.5GHz or 3.5GHz configurations, and includes two VoIP/ATA ports to accommodate your various devices. No word on a price for either one just yet, but you can expect the PC card to hit sometime in the second quarter of this year, with the desktop unit slated for "mid-2008."


Intel and Nokia working on seamless WiFi / WiMAX switchoffs  

We've seen a lot of research and even some products that promise seamless WiFi / cell roaming, but Intel and Nokia are cooking up tech that might one day bring us true uninterrupted broadband connectivity, based on automatic undetectable switchovers from WiFi to WiMAX. Intel's posted up a brief video demoing the tech auto-switching without interrupting a video conferencing session on a laptop, but it's easy to imagine the potential application on a mobile phone or UMPC -- dare to dream after the break.


Video: Intel's WiMax Segway takes geek, extreme  

Pocket protector; check. Horn-rimmed glasses and button down collar; check. Now, if only you could find the perfect vehicle to transport your geek-ass around Mobile World Congress. Enter Intel's nerd-edition Segway. These gyroscopic rollers feature a built-in streaming camera, laptop with WiMAX, and giant flag so that aged jocks can hunt you down in the crowds. Not that you'd be too far from the Android prototype booths anyway, will ya? Video with appropriately dubbed circus music after the break.


Samsung's SWT-W100K WiBro PMP gets official, priced  

We had the chance to get hands-on with Samsung's WiBro-lovin' SWT-W100K back at CES in January. Judging by the arrival of the product waifs, the 4.3-inch, WVGA touchscreen PMP now looks to be getting an official coming-out party in its native South Korea. €341 takes the little all-purpose device with GPS, 2 megapixel camera, Bluetooth, 8GB of internal flash, and DMB mobile television home on a yet to be determined date. VoIP client, personal organizer, and web browser? Sure, that too. No word on the processor choice but it's definitely not running any flavor of Microsoft OS. With any luck, Samsung will bring a US-specced variant capable of running on Sprint's XOHM service later this year. Video refresher posted after the break.


Rumor: WiMAX-equipped Nokia N810 to launch April 1st  

You already know the drill by now — rumor alert — but we can’t hold back from posting this juicy bit of information we just got. We’ve heard the Nokia N810 with WiMAX (we heard it actually is still called the N810, not the N830) is going to launch April 1st. What day is April 1st, you’re asking? The start of CTIA, peoples! Again, we’ve yet to confirm this ourselves, but figured we’d give y’all a heads up.


Clearwire confirms first mobile WiMAX markets  

Clearwire will launch mobile WiMAX in four US markets by year-end, the company confirmed in a conference call last week to announce fourth quarter results.

Clearwire CEO Ben Wolff stated that in 2008 Clearwire would launch all its planned new markets using standards-based 802.16e mobile WiMAX infrastructure. Those markets will be Portland, Oregon; Atlanta; Las Vegas; and Grand Rapids, Michigan, and each of them will feature "VoIP services from day one". All of those markets are in the US top 50, and Atlanta is a top-10 market.

Following up its mobile WiMAX trials in Portland, the company expects to soft-launch services there by the middle of the year and then light up the other three markets by year-end, said Wolff. "We will thoroughly stress-test the platform in Portland and once satisfied we will launch in other markets before launching more widely," he said.

However, Wolff confirmed that Clearwire is scaling back its capex plans for the year. When Clearwire and Sprint Nextel were jointly planning to cover 100 million pops with WiMAX by the end of 2008, Clearwire's coverage obligations under the deal were 30 million new pops.

Now, after the original deal with Sprint fell through, Clearwire has an "expectation" of rolling out WiMAX in markets that will cover just six million people in 2008, according to Wolff. He added that as of the beginning of 2008 Clearwire had "more than 36 million POPs in various stages of design, development and construction" but that the eventual construction and launch of these networks depended on the "availability of required capital".

"If we are not able to attract the required capital we can further modulate network development to match our financial resources," Wolff said. CFO John Butler added that depending on the financial circumstances the 36 million POPs planned could be moved into 2008 or held until 2009.

Of course, that capital might yet become more readily available if a revamped deal with Sprint Nextel can be arranged. Wolff did not comment directly on the recent speculation surrounding a new Clearwire-Sprint deal although he did state: "progress is being made between our companies on several fronts and I hope to have something more definitive to share soon."

In 2007, Clearwire spent some $369m on capex, adding over 1000 cell sites in existing and new markets, almost doubling the size of its network to cover 16.3 million people. It gathered 188,000 new subscribers, almost doubling its subscriber base to 394,000 and increasing revenues from $67.6m to $151.4m.

Wolff said that the subscriber base of nearly 400,000 households provided a foundation of almost one million people who could form "the basis for an evolution from house-based to meaningful mobile services."

In 2008, however, the company expects to reduce capex to $275-290m, said Butler, who broke down the expected expenditure further.

"In new markets in 2008 and a little of 2009 that should run to $150m or so, then for capacity and coverage of older markets perhaps $30m, and then CPE/residential gateways should cost about $25m, and international markets about 10-15 per cent of total capex," he said.

Butler said that he expected a typical mobile WiMAX cellsite to cost $120,000 and that would cover 2600-2800 households or 6000-7000 people.
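Butler’s per-site numbers make it possible to sketch the 2008 build-out arithmetic. The midpoint and coverage target below are simple assumptions drawn from the ranges quoted in this article:

```python
SITE_COST = 120_000      # USD per mobile WiMAX cell site (Butler's estimate)
POPS_PER_SITE = 6_500    # assumed midpoint of the quoted 6,000-7,000 people
TARGET_POPS = 6_000_000  # Clearwire's scaled-back 2008 coverage expectation

sites_needed = TARGET_POPS / POPS_PER_SITE
capex = sites_needed * SITE_COST
print(f"~{sites_needed:.0f} sites, ~${capex / 1e6:.0f}m in site capex")
print(f"~${SITE_COST / POPS_PER_SITE:.2f} per covered pop")
```

The resulting figure of roughly $111m in raw site capex sits plausibly inside the "$150m or so" new-market number Butler quotes above, which also covers more than site hardware.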

Reporting on the ongoing WiMAX trials in Portland, Wolff said that the two operational sites in the city were "exceeding expectations [and] able to demonstrate a seamless user experience across multiple-sized devices." In 2008, Clearwire expects to introduce WiMAX-enabled express cards and mobile WiMAX PC cards, modems, USB devices and embedded chipsets in PCs.

Wolff said that the Portland network was consistently achieving network speeds of 5-6Mbps in the downlink and 2-3Mbps in the uplink. "We have been streaming music, moving at 60mph with 27 seamless handoffs in 20km," he reported.


AAA is the key to mobile WiMAX  

The development of mobile WiMAX network technology is gathering pace, and it brings with it specific challenges, not the least of which is the need for a far superior form of authentication, authorization and accounting (AAA).

AAA is a vital function of any IP telecoms network, since it is the means by which customers' identities are validated, their access to specific services and levels of service is authorized and charging information is prepared.

The AAA server does not perform such functions in isolation, however, relying on interfacing and exchanging information - including policy-management, customer-profile and network-inventory data - with a number of other network elements.

Adding to the complexity is the need to validate the device being used to access the network. Some of the pressure to improve AAA functionality is coming from the diversity of end-user devices, and the problem of developing a robust AAA function in mobile WiMAX networks is exacerbated by the lack of a common standard for interoperability among different authentication methods in fixed-WiMAX networks.

With mobile WiMAX, users can roam across networks, not all of which would use the same authentication protocols, giving rise to the need for a common standard that is also backward-compatible with a range of authentication techniques.

"Fixed WiMAX, as well as Wi-Fi, can use RADIUS [remote authentication dial-in user service] AAA, extensible authentication protocol or a custom authentication method," Tyler Nelson, vice president of business development and marketing at Bridgewater Systems, told Informa Telecoms & Media. "With fixed WiMAX, authorization is carried out in ways similar to those used for common Wi-Fi deployments. However, given that there is currently no interoperability specification governing the use of AAA in fixed WiMAX, vendors have implemented their own custom specifications."

He added that network operators are able to take a proprietary approach because although end-users log on to different access points, they essentially use the same network, using devices approved by the operator. But that is not the case in a roaming situation.

"One of the crucial points about deploying AAA in a mobile WiMAX environment is key management, and this is done very differently in mobile WiMAX than it is in other networks," Nelson said. He added that access keys - used in the process of authentication - are derived and distributed differently.

When the customer roams to another network, not only do the original keys have to be passed on to that network, but its authorization requirements have to be read and accommodated by the home network.

Another major difference is that the AAA server in a mobile WiMAX network must be able to remember key sequence numbers and previously generated keys. A typical AAA server does not need to maintain such knowledge, because keys are used only once.
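As a toy illustration only (none of these class or method names come from the WiMAX Forum specifications), the key-retention behaviour described above might look like this:

```python
from collections import defaultdict

class WimaxKeyCache:
    """Toy sketch: a mobile WiMAX AAA server must retain previously generated
    keys and their sequence numbers across re-authentications, unlike a
    conventional AAA server, where a key is used once and discarded."""

    def __init__(self):
        # session id -> list of (sequence_number, key) in generation order
        self._keys = defaultdict(list)

    def issue_key(self, session_id: str, key: bytes) -> int:
        """Record a newly generated key and return its sequence number."""
        seq = len(self._keys[session_id])
        self._keys[session_id].append((seq, key))
        return seq

    def lookup(self, session_id: str, seq: int) -> bytes:
        """Retrieve an earlier key, e.g. when a roaming partner network asks
        the home AAA to validate a handover."""
        return dict(self._keys[session_id])[seq]

cache = WimaxKeyCache()
cache.issue_key("ms-01", b"key-zero")
cache.issue_key("ms-01", b"key-one")
assert cache.lookup("ms-01", 0) == b"key-zero"  # the old key is still retrievable
```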

One other important function of the AAA server is maintaining quality of service. Unlike the simplistic QoS support used in Wi-Fi, which largely relates only to specifying bandwidth rates, the AAA server in WiMAX must also be able to provide QoS parameters to various network elements, which are set up as part of the user profile during network authentication and authorization.

A customer could be subscribed to a number of different services, some of which - such as streaming video - might have specified bandwidth speeds, while others might be only best-effort services.

The AAA server needs to be able to differentiate among such services and pass the authorization back to the network so that the appropriate provisioning can be made. WiMAX's support for multiple traffic flows with different QoS characteristics enables efficient traffic management and segregation, which in turn enables the provision of service tiers and individual services, such as VoIP and video calling.

The WiMAX Forum is working to standardize these functions and is considering including specifications for AAA support for fixed and nomadic WiMAX in future versions of the standard. In the WiMAX Forum's NWG Stage 3 release 1.0.0 specification, AAA is specified as a basic building block, but the specification also includes some functions that are not typically supported in other AAA deployments, such as Wi-Fi. This version of the standard is focused on the use of AAA in mobile WiMAX, including support for mobile IP.

Bridgewater has had a carrier-grade AAA server on the market for over a year and says it is seeing increasing interest in the product. It has signed channel partnerships with Nortel, Alvarion and, most recently, Motorola.

Bridgewater announced at last month's Mobile World Congress in Barcelona that its AAA Service Controller had been upgraded to support 3GPP-compliant FMC deployments based on UMA, VCC and WLAN architectures, effectively making it access-network-technology-agnostic. Aptilo Networks also has on the market an AAA service controller, which it recently announced was compliant with WiChorus' Intelligent ASN Gateway, which also works with the Bridgewater product.

Bridgewater and Aptilo appear to be the only companies in the market offering AAA products for WiMAX, which is perhaps a sign of the complexity of the problems such products are designed to address. But given the importance of a robust AAA server in mobile-WiMAX-network architecture, that situation is likely to change very soon.


Transmit Power Control, TPC ( for discussion)  

A WiMax mobile station may use TPC to ensure link quality and satisfactory reception of the signal at the base station. It is used to maximize the usable modulation level, which achieves the highest throughput, while at the same time controlling interference to adjacent cells by reducing unnecessary transmit power. However, the importance of TPC in the mobile WiMax standard is somewhat reduced by advanced techniques such as Adaptive Modulation and Coding (AMC), Dynamic Link Adaptation (DLA), OFDMA, subcarrier permutation and fractional frequency reuse. Since WiMax users are assigned only a few sub-channels, a small fraction of the channel bandwidth, the potential cell-edge interference problem can be addressed through sub-channel segmentation and permutation zones instead of TPC.
A mobile unit at the cell edge is typically vulnerable, as it is operating near the sensitivity level; in that case the mobile terminal will transmit at full power, so TPC is not applicable there. However, as the mobile terminal moves closer to the base station, the received downlink SNR increases, and reciprocally the received uplink SNR at the base station also increases. In this situation, the WiMax mobile terminal can shed unnecessary transmit power in different ways: switch to a higher-order modulation and coding scheme, or remain at the same modulation and coding but decrease the number of sub-carriers through DLA.
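The adaptation loop described above can be sketched as a simple table-driven policy. The SNR thresholds, modulation labels and 23 dBm power ceiling below are illustrative assumptions for the sketch, not values taken from the 802.16e standard:

```python
# Illustrative SNR thresholds (dB) and burst profiles; real deployments tune
# these per vendor, and the standard defines many more profiles than this.
PROFILES = [
    (6.0, "QPSK 1/2"),
    (11.0, "16-QAM 1/2"),
    (16.0, "16-QAM 3/4"),
    (21.0, "64-QAM 3/4"),
]
MAX_TX_DBM = 23.0  # assumed handset power ceiling

def adapt(snr_db: float) -> tuple[str, float]:
    """Pick the highest modulation the measured SNR supports, then back off
    transmit power by the SNR margin above that profile's threshold (capped
    at 10 dB). Below the lowest threshold: lowest profile, full power."""
    threshold, modulation = PROFILES[0]
    for t, m in PROFILES:
        if snr_db >= t:
            threshold, modulation = t, m
    margin = max(0.0, snr_db - threshold)
    return modulation, MAX_TX_DBM - min(margin, 10.0)

print(adapt(4.0))   # cell edge: QPSK at the full 23 dBm
print(adapt(25.0))  # near the BS: 64-QAM with 4 dB of power backoff
```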

In my personal experience, a base station with beamforming / adaptive array / smart antenna technology also works the output power. The receivers, most likely all of them, will not have that type of beamforming or smart antenna, but will adjust the RF power to the antenna. This could go into negative gain, i.e. -2 dB (TX attenuation) of RF power, or could go to full power, say 30 dBm.
The Alcatel-Lucent unit is an OEM product from Navini (now Cisco), and they incorporate those functions as standard in all BS.

Receiver side: you mean the receiver's transmit (output) power, ranging from -2 dB (attenuation) up to 30 dBm? Yes, according to the specification this could be right, but I am thinking of a more realistic figure. In practice the receiver can adjust to below 20 dBm (say 18 or 15 dBm). My understanding was that since the mobile supports DLA, AMC and OFDMA, it is always possible to adjust the link by reducing or increasing the number of subcarriers. The transmit power is calculated per MHz.
What is the typical maximum output power at a WiMax mobile terminal (receiver)?

My personal guess: it's easier and more functional to do it on the subscriber (changing modulation). If for any reason the subscriber is not receiving the BS signal in a decent way, I would change the OFDMA modulation to something lower, or even QPSK. Power output would also be adjusted. This is where it gets complicated. If for any reason you have one weak subscriber, it will without a doubt affect the total throughput of that BS. Let me be more specific: if the BS sector has capacity for 33 Mbps and within that sector it has a weak subscriber, the base will change modulation just for that subscriber, transmit the packets, and then switch back to, say, QAM64. The modulation change, for whatever short time it lasted, say milliseconds, lowered the total throughput of the base sector. And by changing the modulation type (QAM64, QAM16 or QPSK) you are changing the total number of subcarriers. This would be in a TDD system; the scenario would be different on FDD.

To answer the question on the output power of the subscriber, I would say most will do less than 1 watt (30 dBm): AirSpan MiMax 22 dBm, ZyXEL MAX-200M1 27 dBm, Navini RipWave 30 dBm, Telsima 3100 20 dBm, and so on. They are all adjustable according to the WiMax specs.

Yes, this is also my understanding, assuming TDD and a transmit power of 22 dBm (to observe the cases where the subscriber will reduce power below 22 dBm):

- If a subscriber is located at or near the cell edge, or receives a weak downlink signal, it will transmit at full power (say 23 dBm) with a lower-order modulation (say QPSK) in order to stay above the sensitivity level.
- If a subscriber is located in the middle of the cell (adequate received DL signal), it switches to a higher modulation (16 QAM) and still transmits at full power (22 dBm), since the required sensitivity level is relaxed; only a slight power adjustment (0-3 dB) may be necessary.
- If a subscriber with 16 QAM or 64 QAM is located near the base station (strong received power), it needs to reduce power because it is already using a higher-order modulation; in that case it can reduce the number of sub-carriers (bandwidth) to adjust the received power, and then transmit at full power again.

So power reduction is not so promising for WiMax capacity or networks.

How is TDD better than FDD? My understanding is that TDD is better than FDD, although some countries may require FDD. From a technical point of view, I see TDD as much better than FDD. TDD's one main disadvantage is that it needs proper system-wide synchronization to counter interference.

Speed is the main disadvantage of TDD: time. If you have the license, the spectrum, and money is no object, then without a doubt I would choose FDD.
All aspects of the payload must be tightly synchronized in TDD. Reusing the same spectrum to transmit different packets at the same time can be done in TDD, but you lose time, and time is proportional to speed in networking. If you are into voice and telephony and you have the spectrum, use FDD. All real commercial licenses in the USA are blocks of two pairs, for cellular and now data, for several reasons.
How can you get wireline speeds in TDD with a 0.1-microsecond switching time? You can switch all you want between TX and RX, but the fact is, however good the switching time, a real full-duplex world will always be better.
Now, don't get me wrong: the combination of TDD and FDD is great. Satellite data transmission uses a similar scheme, but because a geostationary satellite is so far away it's hard to see the beauty of it. Digital cellular/mobile devices use something similar, but it's aimed 99% at voice.
I will always say: "If you have the spectrum, and the standard supports it, use FDD; you will get more packets per second."
For the Internet world, where one downloads more data than one uploads, the second block could use a combination of TDD and FDD, but I don't think that's in the new FDD mobile WiMax standard!
Back to the original subject: in a TDD environment, output power must be given careful consideration in both directions, TX and RX. In FDD, from the base to the subscriber, the base could be screaming out and attenuation in the receiver could fix that, but it gets more complicated than that, and it's a new, complex discussion for the RF engineers.
The only problem with FDD at the moment is cost and complexity.

Yes, you are also right from the timing point of view. However, I should also consider FDD, as FDD is already approved in mobile WiMax. Could you please provide some information concerning the FDD channel bands and the UL and DL separations? Is there anywhere I can find this information?

Actually, I will detect the UL signal (so I will know the UL frequency), and without detecting the downlink signal I want to know the DL frequency. For TDD it is very easy, as UL and DL share the same channel. But how can I predict the DL frequency in FDD? Do you have any idea, or any relation between the DL and UL frequency bands / separation in FDD? I know it will vary from country to country.

I was going to answer "look at the block assignment," but like you said, it will vary from country to country. Best practice is to obtain the country's regulations for the spectrum you are checking. This could be a challenge in a third-world country, where it may be considered "TOP SECRET"! Yeah, right!
In practice, get your analyzer, locate a BS, set it to the band, max hold, and wait. This will give you the DL block. Check the users (SMs) to know who's who.
I've done this; one can actually see the output power control that the devices have. But unfortunately I've done it on a TDD block, so I'll wait for the FDD equipment and check it out.


BBC Apologizes for Misleading Wi-Fi Scare  

The BBC has publicly apologised for a report in their documentary series Panorama in which they broadcast misleading facts on the risks of Wi-Fi broadband, reports Broadband Genie.

The Panorama investigation (video) claimed that radiation levels in some schools were up to three times the level found in the main beam of intensity from mobile phone masts. The program included a request that Wi-Fi technology should be tested under the same schemes as mobile phone masts.

During the program, three scientists were shown expressing concern over the possible health effects of using Wi-Fi in schools.

However, only one interview was conducted with a scientist who defended the use of Wi-Fi. The interview with this independent Wi-Fi supporter was deemed to be “unfair” compared to the treatment of the other scientists.

Panorama famously featured spaghetti harvesting from trees back in April 1957 (video).

The BBC’s Editorial Complaints Unit (ECU) condemned Panorama for a poorly balanced report, which gave a “misleading impression of the state of scientific opinion on the issue”.
