Satellite ATM Networks

Increasing demand for high-speed, reliable network transmission has motivated the design and implementation of new technologies capable of meeting lofty performance standards. A relatively new form of network transmission known as Asynchronous Transfer Mode (ATM) has emerged as a potential technological solution to the ever-increasing bandwidth demand imposed upon networks. ATM technology can be a good choice of network medium for many transmission tasks such as voice, data, video, and multimedia. Coupling ATM network implementations with the benefits associated with satellite networks may prove to be a fruitful merger in the pursuit of faster, more reliable, far-reaching networks.

Using ATM network technology over a satellite network medium can provide many advantages over the typical terrestrial-based network system. Some of these advantages include remote area coverage, immunity from earth-bound disasters, and insensitivity to transmission distance. Additionally, these networks can offer bandwidth on demand, broadband links, and easy addition and deletion of network users. Adapting ATM technology for use over a satellite network is a new concept still in its infancy. Numerous problems such as signal adaptation, network congestion, and error control will have to be overcome in order to produce a fully functioning, viable system from the marriage of these two distinct technologies. Although ATM over satellite networks are not yet commonplace, many military, research, and business entities are expressing interest in their development. It is a promising idea that may be one of the future solutions to the expanding problem of network performance needs.

SATELLITE NETWORKS

A satellite network is a highly specialized wireless type of transmission and reception system. Satellites send and receive signals to and from the earth in order to facilitate data transfer between various points on the planet. A typical satellite is rocket-launched and placed in a specific type of orbit around the globe. In 1945, scientist and author Arthur C. Clarke first proposed that satellites in orbit around the earth could be used for communication purposes. (The Geosynchronous Earth Orbit described below is sometimes called the Clarke orbit in honor of the author's suggestion.) The first satellite successfully launched and placed in orbit was a Soviet artificial satellite, roughly the size of a beach ball. This satellite, launched in the late 1950s, simply transmitted a short radio beacon signal repeatedly. Today, there are hundreds of satellites circling the globe serving many diverse purposes such as communications, weather tracking and reporting, military functions, photo and video imaging, and global positioning information.

Satellites communicate with the earth by transmitting radio waves between the satellites and earth-bound reception stations. The frequencies (and thus wavelengths) used are determined by the location of the satellite in space, as described below. The signals transmitted between the earth and the satellites are sent and received by various sized antennas known as satellite dishes, typically located near the earth-station receivers. Signals sent from the earth to the satellites are referred to as uplinks, whereas the signals emanating from the satellites are called downlinks. The area of coverage on earth that a satellite's signals can reach is known as the satellite's footprint. (See Figure 1 below.)
Radio waves transmitted to (and from) satellites usually travel over very large distances, which creates relatively long propagation delays since the speed of transmission is limited by the speed of light.

Figure 1. A satellite's footprint.

Three commonly used satellite frequency bands are C, Ku, and Ka, with C and Ku being the most frequently used in today's satellite systems. C-band transmissions occupy the 4 to 8 GHz frequency range, whereas the Ku and Ka bands occupy the 11 to 17 GHz and 20 to 30 GHz frequency ranges respectively. There is a relationship between transmission frequency, wavelength, and antenna (dish) size: the higher the frequency, the smaller the wavelength and, accordingly, the smaller the dish. Conversely, a lower frequency corresponds to a larger wavelength, which in turn requires a larger dish. A C-band antenna is generally 2-3 meters in diameter, whereas a Ku-band antenna can be as small as 18 inches in diameter. Ku is the band of choice for many home DSS systems in use today.

The majority of satellites presently circling the globe are in Geosynchronous Earth Orbit (GEO). These satellites are positioned at a point 22,238 miles above the earth's surface. As the name implies, they circle the globe once every 24 hours, completing one orbit for every earth rotation. Viewed from the earth, the satellites appear to be stationary, remaining in a fixed position. It is for this reason that these satellites are also occasionally called Geostationary satellites. (There is a difference between the two terms, however: geosynchronous orbits can be circular or elliptical, whereas geostationary orbits must be circular and located above the earth's equator.) The GEO orbit allows the satellite dishes located on the earth to be aimed at the orbiting satellite once, without requiring continual repositioning. A satellite in this type of orbit can provide a coverage footprint equal to 40% of the earth's surface. Therefore, three evenly spaced GEO satellites (120 angular degrees apart) can provide transmission coverage for virtually the entire populated world.

In recent years, technological innovations have paved the way for new types of satellite orbits and designs. One of these new orbits is the Medium Earth Orbit (MEO). Satellites in this type of orbit are located at an altitude of approximately 8,000 miles above the earth. Placing satellites at this level allows for shorter transmission paths, thereby increasing the strength of the signal and decreasing the transmission delay. This in turn means that the receiving equipment on earth can be smaller, lighter, and less expensive. The downside of these altitudes is the smaller footprint provided by a MEO satellite as opposed to its GEO counterpart.

Another relatively new category of satellite orbits is the Low Earth Orbit (LEO). There are three categories of LEO satellites: Little LEO, Big LEO, and Mega LEO. LEO satellites typically orbit at a distance of only 500 to 1,000 miles above the earth. As with the MEO satellites described above, LEO satellites further reduce the transmission delay and equipment expense, maintain a strong signal, and project a smaller footprint.
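To put these delay and altitude figures in perspective, the short Python sketch below (an illustration added here, not part of any original design) estimates the minimum one-way and ground-to-ground propagation delays for the three orbit classes, assuming an idealized straight-line path at the speed of light directly beneath the satellite; the LEO value of 750 miles is simply the midpoint of the range quoted above.

    # Rough propagation delays for the orbit classes discussed above, assuming
    # the signal travels the nominal altitude in a straight line at the speed
    # of light (an idealized, best-case geometry).

    SPEED_OF_LIGHT_KM_S = 299_792.458
    MILES_TO_KM = 1.609344

    # Nominal altitudes quoted in the text (miles above the earth's surface).
    ORBITS = {"GEO": 22_238, "MEO": 8_000, "LEO": 750}

    for name, altitude_miles in ORBITS.items():
        one_way_ms = altitude_miles * MILES_TO_KM / SPEED_OF_LIGHT_KM_S * 1000.0
        # A ground-to-ground hop goes up to the satellite and back down, so the
        # minimum end-to-end delay is twice the one-way figure.
        print(f"{name}: one-way ~ {one_way_ms:6.1f} ms, hop ~ {2 * one_way_ms:6.1f} ms")

The GEO figure of roughly 120 ms one way (about a quarter of a second for the up-and-down hop) is what drives most of the protocol concerns discussed later in this paper.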
ATM NETWORKS

A network technology that has gained considerable popularity in the recent past is Asynchronous Transfer Mode (ATM). This network switching technology, also known as cell switching, has been embraced by various factions of the network transmission community such as telephone companies, scientific research firms, and the military. ATM was designed to operate on transmission media at speeds of 155 Mbps or more, giving it the advantage of good performance. ATM is connection-oriented, offering a high Quality of Service (QOS) level. The ATM network technology was originally envisioned as a way to create large public networks for the transmission of data, voice, and video. ATM has subsequently been embraced by the LAN community to compete with Ethernet and Gigabit Ethernet.

In its simplest form, ATM networks are switched networks that create a connection and path from a sender to one or more receivers for the transmission of fixed-size frames known as cells. Cell transport is accomplished using a statistical multiplexing algorithm for transmission decisions and a technique called cell segmentation and reassembly. Since the majority of frames passed to an ATM network are not the required byte size, the ATM protocol must segment the frames into the proper cell size prior to transmission and subsequently reassemble the ATM cells back into their original form at the receiver.

ATM cells are fixed-size data packets of 53 bytes each: always 5 bytes of header and 48 bytes of payload. This fixed size has the beneficial side effects of simpler switch hardware, improved cell queueing, and uniform switching operations that all take the same amount of time to complete. ATM cells have two possible formats depending upon where the cell happens to be in the network: the User-Network Interface (UNI) format for the user-to-network interface, and the Network-Network Interface (NNI) format for the network-to-network interface. Figure 2 below shows the format of a UNI ATM cell. The NNI version of the cell differs only in that the Generic Flow Control (GFC) field of the UNI version is replaced by 4 additional bits for the Virtual Path Identifier (VPI).

As can be seen from Figure 2, the cell starts with the GFC, which is intended to be used to arbitrate access to a link in the event that the local site uses a shared medium to attach to an ATM configuration. Following the GFC are the VPI and Virtual Channel Identifier (VCI) bits, used to identify the path (channel) created for this transmission. Next come the Payload Type bits, which are used to facilitate management functions and to indicate whether or not user data is contained in the cell. The final two fields prior to the 48-byte payload are the Cell Loss Priority (CLP) bit and the Header Error Check (HEC) byte. The CLP establishes a priority value in case the network becomes congested and needs to drop one or more cells. (See below for a discussion of traffic management.) The HEC is used for error checking of the cell header, incorporating the CRC-8 polynomial. (CRC-8 is one of several commonly used polynomials for Cyclic Redundancy Checking within network protocols.)

ATM can be run over several different physical media and physical-layer protocols including SONET and FDDI. To adapt ATM to these various media layers and accomplish the necessary segmentation and reassembly of the ATM cells, a protocol layer called the ATM Adaptation Layer (AAL) is inserted into the network protocol stack between the ATM layer and the other protocol layers desiring ATM service. (See Figure 3 for a typical ATM protocol stack.)
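As a concrete recap of the UNI cell layout described above, the sketch below packs the header fields into their 40 header bits and computes the HEC over the first four bytes. The function names are illustrative, and the fixed pattern added to the CRC follows the usual ITU-T convention for the HEC.

    # Sketch of the 5-byte UNI cell header described above. Field widths:
    # GFC (4 bits), VPI (8), VCI (16), Payload Type (3), CLP (1), HEC (8).
    # The NNI format simply folds the GFC bits into a 12-bit VPI.

    def hec(header4: bytes) -> int:
        """CRC-8 over the first four header bytes, generator x^8 + x^2 + x + 1,
        with the fixed pattern 0x55 added per the usual HEC convention."""
        crc = 0
        for byte in header4:
            crc ^= byte
            for _ in range(8):
                crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        return crc ^ 0x55

    def build_uni_header(gfc: int, vpi: int, vci: int, pt: int, clp: int) -> bytes:
        """Pack the field values into the five header bytes."""
        word = ((gfc & 0xF) << 28) | ((vpi & 0xFF) << 20) | ((vci & 0xFFFF) << 4) \
               | ((pt & 0x7) << 1) | (clp & 0x1)
        first_four = word.to_bytes(4, "big")
        return first_four + bytes([hec(first_four)])

    # Example: a user-data cell on VPI 1 / VCI 42 that may be discarded first
    # under congestion (CLP = 1); header plus 48-byte payload gives 53 bytes.
    cell = build_uni_header(gfc=0, vpi=1, vci=42, pt=0, clp=1) + bytes(48)
    assert len(cell) == 53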
ATM OVER SATELLITE

To take full advantage of the benefits of both satellite and ATM networks, a network architecture and protocol stack must be implemented that allows communication between these very different technologies. The key component of such a system is the ATM Satellite Internetworking Unit (ASIU). (This component is also referred to by some as the ATM Adaptation Unit.) The ASIU is the essential piece of equipment for a properly functioning interface between the satellite and ATM systems. It is responsible for handling many complex issues of the system interface such as management and control of system resources, real-time bandwidth allocation, network access control, and call monitoring. In addition, the ASIU must take care of system timing and synchronization control, error control, traffic control, and overall system administrative functions. (See Figure 4.)

Figure 4. The ASIU as the bridge between the ATM and satellite systems.

The ASIU is the single interface that acts as a bridge between the ATM and satellite systems, allowing the desired data to be exchanged back and forth across these two distinctly different types of transmission media. As can be seen on both sides of the typical ATM-to-satellite protocol stack of Figure 5, the ASIU is placed between the last leg of the ATM network and the front of the satellite system equipment. From this position the ASIU can perform all of its necessary functions, such as data segmentation and reassembly and the features described above. As mentioned above, the ASIU performs many functions, all contributing to the proper operation of the ATM-to-satellite interface. Five of the most interesting and important of these are the cell transport method, satellite link access, error control, traffic management, and bandwidth management. These five issues are explored in the following discussions.

Figure 5. A typical ATM-to-satellite protocol stack.

Cell Transport Methods

Cell transport across an ATM over satellite network can make use of an existing digital cell transport format. The three existing cell transport protocols that have been considered for potential use with this type of system are Plesiochronous Digital Hierarchy (PDH), Synchronous Digital Hierarchy (SDH), and the Physical Layer Convergence Protocol (PLCP). The most feasible and promising protocol for cell transport within this system turns out to be SDH, for the reasons delineated below.

The PDH transmission system was originally developed to carry digitized voice efficiently in major urban areas. (PDH, however, is being replaced by other transport methods such as SONET and SDH.) In this scheme, the multiplexer at the sending end reads multiple tributaries that can have slightly differing clock speeds. The multiplexer reads each tributary at the highest allowed clock speed and checks whether there are bits waiting in the input buffer. If the buffer is empty, the multiplexer uses bit stuffing to bring the tributary signal up to the common, higher clock rate, and it notifies the receiving demultiplexer that the data contains stuffed bits so that they can be deleted at the receiving end. The downside of the PDH system is the added overhead and complexity created by the redundant add and drop operations required by the bit stuffing. In addition, PDH has difficulty recovering and rerouting signals following a network failure.
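The justification mechanism just described can be illustrated with a toy sketch: the faster aggregate clock reads one bit per tick, inserts a stuff bit whenever the slower tributary has nothing ready, and signals which bits were stuffed so that the far end can delete them. The explicit "flags" list stands in for PDH's justification-control bits and is, of course, a simplification rather than real PDH framing.

    from collections import deque

    # Toy illustration of PDH bit stuffing (justification): the multiplexer reads
    # one bit per aggregate clock tick; when the slower tributary has no bit
    # ready, it inserts a stuff bit and flags it so the demultiplexer drops it.

    def multiplex(tributary_bits, ticks):
        buffer, line, flags = deque(tributary_bits), [], []
        for _ in range(ticks):
            if buffer:
                line.append(buffer.popleft())
                flags.append(False)      # genuine data bit
            else:
                line.append(0)           # stuff bit, value irrelevant
                flags.append(True)       # tell the far end to delete it
        return line, flags

    def demultiplex(line, flags):
        # The receiving demultiplexer strips every bit marked as stuffed.
        return [bit for bit, stuffed in zip(line, flags) if not stuffed]

    data = [1, 0, 1, 1, 0]
    line, flags = multiplex(data, ticks=8)   # aggregate clock is faster than the tributary
    assert demultiplex(line, flags) == data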
SDH, on the other hand, is better suited to the ATM-to-satellite interface since it was originally designed to take advantage of a completely synchronized network. The fiber optic transmission plant typically used for an ATM network distributes a very accurate clock throughout the network. The key ingredient of the SDH protocol is the inclusion of pointer bytes that indicate the beginning of the payload. This helps avoid data loss due to bit slippage caused by slight phase and/or frequency variations. SDH has other advantages over PDH: it can handle higher data rates, supports easier and less expensive multiplexing and demultiplexing, and has better provisions for network management.

The PLCP transport method was originally designed to carry ATM cells over existing DS3 facilities. The PLCP format consists of a sequential group of 12 ATM cells, with each cell preceded by 4 overhead bytes. A frame trailer of either 13 or 14 nibbles is appended to the end of the group of 12 cells to facilitate nibble stuffing. Each 12-cell group, together with its overhead, requires a 125 microsecond interval for transmission. Unfortunately, PLCP is susceptible to corruption caused by burst errors that can affect the perceived number of nibbles required for stuffing, resulting in frame misalignment.

SDH appears to be the logical choice for cell transport in this type of system. However, an important point to consider when using SDH is the possibility of an incorrect payload pointer. This situation may produce faulty payload extraction, corrupting previously received cells and necessitating their dismissal. It is therefore imperative for the correct functioning of an SDH-based system to employ techniques capable of spreading out errors and performing enhanced error monitoring. (See the discussion of error control below.)
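The pointer idea at the heart of SDH can be illustrated with a heavily simplified sketch: a pointer in the frame overhead records where the floating payload container begins, and the receiver follows that pointer rather than assuming the payload is aligned to the frame boundary. The frame size and dictionary-based "frame" used here are toy constructions, not real SDH framing.

    # Toy illustration of SDH-style pointer operation: each frame's overhead
    # carries a pointer giving the offset, within the frame's payload area, at
    # which the floating container begins; the remainder spills into following
    # frames. The sizes here are invented and far smaller than real SDH frames.

    PAYLOAD_AREA = 16   # bytes of payload area per toy frame

    def frame_stream(container: bytes, start_offset: int):
        """Split one container across frames, recording the pointer in the
        frame where the container begins."""
        frames, pos, offset = [], 0, start_offset
        while pos < len(container):
            room = PAYLOAD_AREA - offset
            chunk = container[pos:pos + room]
            frames.append({"pointer": offset if pos == 0 else 0,
                           "area": bytes(offset) + chunk})
            pos += room
            offset = 0                   # later frames are filled from byte 0
        return frames

    def recover(frames):
        """The receiver starts reading at the pointed-to byte of the first
        frame and concatenates the payload areas of the frames that follow."""
        data = frames[0]["area"][frames[0]["pointer"]:]
        for frame in frames[1:]:
            data += frame["area"]
        return data

    container = b"ATM cells ride inside the floating SDH container"
    assert recover(frame_stream(container, start_offset=5)) == container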
Satellite Link Access

Access methods typically seen in local and metropolitan area networks are not suited for use with satellite systems due to the high propagation delays created by the long distances to the satellites. LAN and MAN performance depends upon short transmission times, whereas satellite systems are effective when utilized at maximum capacity. Therefore, an access method must be used in this system that "keeps the pipe full." There are presently three basic access methods used in satellite systems. Unfortunately, none of these schemes is optimized for use with ATM technology. These three methods, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), and Demand Assignment Multiple Access (DAMA), can be modified from their present form to a configuration better suited for use in an ATM over satellite implementation.

The FDMA access method divides the total available satellite bandwidth into equally sized portions. Each portion is assigned to one earth station for exclusive use by that station. This scheme eliminates collisions, since there is no signal interference between individual earth stations. In addition, FDMA can be used with smaller antennas. Unfortunately, FDMA requires guard bands for signal separation, which works against the goal of maximum capacity usage. (FDMA is also considered to be rather inflexible.)

Unlike the subchannel frequency division of FDMA, the conventional TDMA access method divides the bandwidth into time slots. These time slots are usually equal-sized; however, variable time slots or allocation-on-demand configurations are also possible. Using a round-robin scheme, each earth station receives the use of the entire bandwidth for a small period of time. This turns out to be a suitably flexible setup for packet traffic. TDMA unfortunately requires a large antenna, and since the time slot synchronization adds complexity to the system, the earth-bound hardware cost is increased. A variation of the TDMA approach is the Code Division Multiple Access (CDMA) technique, also known as spread spectrum. In this scheme, transmissions from the earth stations are spread across the shared spectrum using a unique code identifier per station. This helps to combat signal jamming, and for this reason the scheme is used frequently by the military. Another variation of the TDMA access method, projected to be used in most future satellite systems, is Multifrequency Time Division Multiple Access (MF-TDMA). This method extends the single-frequency scheme used by conventional TDMA to multiple frequencies that can be shared by all earth stations. MF-TDMA therefore increases available bandwidth and reduces antenna size.

The third existing satellite link access method is Demand Assignment Multiple Access (DAMA). This technology allows dynamic allocation of bandwidth based on the needs of the network user. DAMA is suitable when communication over the satellite is not required to be continuous. This permits alternating the channels over which the ATM cells are transmitted, as opposed to establishing a single channel and maintaining that connection continuously. DAMA can be combined with other access methods such as MF-TDMA or Single Channel Per Carrier (SCPC). Using these separate technologies together allows the system to take advantage of the benefits of both. For example, as mentioned above, DAMA is suited for non-continuous transmissions; coupling it with SCPC, which is suited for continuous connections, can help to achieve greater efficiency in the ATM over satellite network configuration.

Error Control

A well-known problem facing satellite transmission systems is their susceptibility to burst errors. This characteristic is created by variations in satellite link attenuation and by the use of convolutional coding to compensate for channel noise. ATM systems, however, are designed to handle random errors rather than burst errors. Multiple burst errors in an ATM over satellite system may therefore cause many ATM cells to be discarded during transmission. In order to alleviate this problem, an efficient error checking and/or error correcting mechanism should be in place when implementing this type of system. Implementing an automatic repeat request (ARQ) technique at the link layer of the protocol stack can help to reduce the high error ratios created by burst errors. There are three common versions of ARQ used in this situation: stop-and-wait, Go-back-N, and selective-repeat. Most existing ATM over satellite networks make use of the Go-back-N scheme. Stop-and-wait is simple but not effective in the satellite environment due to the long propagation delay. Selective-repeat has the benefits of good throughput and error performance but suffers from the disadvantages of sender and receiver complexity and the potential for out-of-order packet reception.
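Because Go-back-N is the ARQ variant most often used over these links, the sketch below shows the core of a Go-back-N sender: a sliding window of unacknowledged frames, a cumulative acknowledgement that advances the window, and a timeout that retransmits everything from the oldest outstanding frame onward. The window size and the transmit() hook are illustrative assumptions rather than parameters taken from any particular system.

    # Minimal Go-back-N sender logic: a sliding window of unacknowledged frames;
    # a cumulative ACK slides the window forward, and a timeout causes every
    # frame from the oldest unacknowledged one onward to be retransmitted.

    class GoBackNSender:
        def __init__(self, transmit, window_size=8):
            self.transmit = transmit      # callable that puts a frame on the link
            self.window = window_size
            self.base = 0                 # oldest unacknowledged sequence number
            self.next_seq = 0
            self.buffer = {}              # seq -> payload, kept until acknowledged

        def send(self, payload):
            if self.next_seq < self.base + self.window:
                self.buffer[self.next_seq] = payload
                self.transmit(self.next_seq, payload)
                self.next_seq += 1
                return True
            return False                  # window full: caller must wait

        def on_ack(self, ack_seq):
            # Cumulative ACK: everything up to and including ack_seq is delivered.
            for seq in range(self.base, ack_seq + 1):
                self.buffer.pop(seq, None)
            self.base = max(self.base, ack_seq + 1)

        def on_timeout(self):
            # Go back N: resend every outstanding frame starting at the window base.
            for seq in range(self.base, self.next_seq):
                self.transmit(seq, self.buffer[seq])

    # Usage sketch: collect what goes onto the "link" in a list.
    link = []
    sender = GoBackNSender(transmit=lambda seq, data: link.append((seq, data)))
    for chunk in (b"cell-0", b"cell-1", b"cell-2"):
        sender.send(chunk)
    sender.on_ack(0)        # frames 1 and 2 remain outstanding
    sender.on_timeout()     # retransmits frames 1 and 2

The long satellite round-trip time makes the retransmission burst after a timeout expensive, which is exactly why stop-and-wait is ruled out and why selective-repeat, despite its complexity, is attractive.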
Traffic Management

Traffic and congestion control are very important issues facing the designers of ATM over satellite networks who wish to maintain a high level of Quality of Service (QOS). The long propagation delays of satellite systems, coupled with their limited bandwidths (as opposed to the bandwidths of the optical fiber links typical of land-based ATM systems), make the efficient implementation of these control functions imperative. Poor overall system performance caused by the neglect of proper traffic and congestion control mechanisms can render the ATM over satellite network unusable.

Three common traffic control techniques used with land-based ATM systems are traffic shaping, connection admission control (CAC), and deliberate (selective) cell dropping. Although these methods work well for land-based systems, they need to be modified for acceptable use with an ATM over satellite network in order to maintain the appropriate QOS.

Traffic shaping changes the characteristics of cell streams to improve performance. Some examples of traffic shaping are peak cell rate reduction, burst length limiting, and reduction of Cell Delay Variation (CDV) by suitably spacing cells in time and by queue service schemes. The limitation of this approach in the ATM satellite environment is the inability to dynamically change the traffic parameters during network congestion.

CAC is an effective traffic control mechanism when used with systems that experience occasional congestion. This method is a set of actions that the system can take to allow or disallow establishment of an ATM connection based on the amount of network congestion at the present moment. In the ATM satellite system, however, this scheme is effective during the connection setup phase only: the long propagation delay of the satellite portion of the system precludes the technique from being useful during transmission. If the system faces more than occasional congestion, performance suffers from the inability to establish ATM connections.

The deliberate or selective cell dropping technique is based on the idea of dropping cells when the network becomes congested. The determining factor concerning which cells are to be dropped is the Cell Loss Priority (CLP) bit carried in the cell header. (See Figure 2 for the location of the CLP bit in the ATM cell.) On its own, this scheme is not well suited to the ATM satellite environment, since the retransmission of many dropped cells over long propagation delays hinders overall performance.

Two additional schemes proposed for use with ATM over satellite networks are Explicit Forward Congestion Indication (EFCI), also known as Forward Explicit Congestion Notification (FECN), and Backward Explicit Congestion Notification (BECN). EFCI conveys congestion notification forward to the destination, which in turn informs the source via its peer in the higher protocol layers; the source can then take appropriate action to reduce additional traffic through the affected channel. The problem with this method is that at least a one-way propagation delay is required to notify the source of the congestion. BECN is a faster mechanism than EFCI, since a congested network element can use it to send congestion information in the reverse direction of the traffic flow without requiring peer notification. However, like the EFCI technique, BECN is still subject to long propagation delays if the congestion occurs at the destination.
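One common way to realize the CLP-based selective dropping described above is the threshold policy sketched below: once queue occupancy passes a threshold, arriving CLP = 1 cells are discarded, while CLP = 0 cells are discarded only when the queue is completely full. The queue limit, threshold, and dictionary representation of a cell are invented for the example.

    from collections import deque

    # Simple illustration of CLP-based selective cell discard at a congested
    # queue: once occupancy passes a threshold, arriving CLP = 1 cells are
    # dropped; CLP = 0 cells are dropped only when the queue is completely full.

    QUEUE_LIMIT = 8
    CLP1_THRESHOLD = 5      # start shedding low-priority cells early

    def enqueue(queue: deque, cell: dict) -> bool:
        """Return True if the cell is queued, False if it is discarded."""
        if cell["clp"] == 1 and len(queue) >= CLP1_THRESHOLD:
            return False                 # congestion building: drop CLP = 1 first
        if len(queue) >= QUEUE_LIMIT:
            return False                 # full: even CLP = 0 cells must be dropped
        queue.append(cell)
        return True

    queue = deque()
    results = [enqueue(queue, {"vci": 42, "clp": n % 2, "payload": bytes(48)})
               for n in range(12)]

Over a satellite link such local discarding must be used sparingly, for exactly the retransmission-delay reason noted above.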
Bandwidth Management

Since satellite bandwidth is limited, proper bandwidth management in the ATM over satellite system is critical. A substantial degradation of the overall performance of the combined terrestrial and satellite system will severely inhibit its usefulness, and applications requiring high bandwidth allocations are particularly affected by this issue. Bandwidth management is a difficult matter to handle within a satellite network. One possible way to address the problem is to allocate bandwidth for the channel at the connection setup phase by using a Burst Time Plan (BTP). This traffic assignment scheme is a mapping tool that indicates the positions and lengths of bursts in the transmission frame. The BTP restricts the number of ATM cells in the bursts or subbursts that each earth station can transmit. The number of Virtual Paths (VPs) and Virtual Channels (VCs) of the ATM connection can also be restricted by the BTP to help with bandwidth management.

SUMMARY

The usefulness of a combined ATM and satellite network will be determined by its ability to maintain the QOS of terrestrial-based ATM systems. This will require a seamless integration of the two systems without serious performance degradation and/or increased errors. The challenge facing designers who desire the combined benefits of the distance advantages of satellites and the speed and reliability of ATM is formidable. Many of the problems and concerns discussed in this document remain unresolved, precluding the worldwide implementation of ATM over satellite systems. As network technology advances, perhaps new schemes and techniques will be developed that alleviate some of the limiting factors of complexity, cost, and delay. These issues will have to be resolved if ATM technology over a satellite network is to play a significant role in the rapidly evolving information infrastructure.