Tuesday, May 26, 2020

Information Technology Dissertations - Internet Media

Chapter 2: Literature Review

2.1 Introduction

Multimedia streaming over the Internet is revolutionizing the communication, entertainment and interactive gaming industries. The web has become a popular medium for video streaming because the user does not have to wait for a large file to download before seeing the video or hearing the sound; instead, the media is sent in a continuous stream and is played as it arrives. Streaming can integrate other media formats such as text, video, audio and images, and even live radio and TV broadcasts can be delivered through a single medium. To support the growth of multimedia technology, these applications demand more in terms of bandwidth, latency and reliability than traditional data applications [1]. Transporting multimedia traffic over networks is becoming more challenging because multimedia is becoming cheaper and is therefore used more and more. The problems of carrying multimedia flows on networks are mainly related to the bandwidth they require and to the strict maximum delay requirements that must be met [2]. This is especially important when multimedia applications have to provide users with real-time interaction.

Because of the rapid growth of Internet usage and the requirements of new applications, IPv4 is no longer sufficient to support future networks. Many new devices, such as mobile phones, require an IP address to connect to the Internet, so a new protocol is needed to provide new services. To overcome these problems, a new version of the Internet Protocol has been introduced, called the Internet Protocol next generation (IPng or IPv6), designed by the IETF [3] to replace the current version, IP Version 4 (IPv4). IPv6 is designed to solve the problems of IPv4 by serving the same function without the same limitations, and it is not totally different from IPv4. The differences between IPv6 and IPv4 fall into five major areas: addressing, routing, security, configuration and support for mobile devices [4]. As with other developments and inventions, the problems of the current Internet Protocol led researchers to develop new techniques to solve them. Changes were made to the current protocol, but they did not help much, and in the end this work led to the development of a new protocol known as IPv6 or IPng.

2.2 The OSI 7-Layer Model

Computer networks are complex dynamic systems, and it is a difficult task to understand, design and implement a computer network. Networking protocols need to be established for everything from low-level computer communication up to how application programs communicate. Each step in this stack is called a layer, and dividing the problem into several layers simplifies the solution. The main idea behind layering is that each layer is responsible for different tasks. The Open System Interconnection (OSI) Reference Model defines seven layers [5]:
1. Physical Layer. This layer deals, for instance, with the conversion of bits to electrical signals and bit-level synchronization.
2. Data Link Layer. It is responsible for transmitting information across a link, detecting data corruption, and addressing.
3. Network Layer. This layer enables any parties in the network to communicate with each other.
4. Transport Layer. It establishes reliable communication between a pair of end systems and deals with lost and duplicated packets.
5. Session Layer. This layer is responsible for dialogue control and synchronization.
6. Presentation Layer. The main task of this layer is to represent data in a way convenient for the user.
7. Application Layer. Applications in this case include web browsing, file transfer, etc.

The Network Layer is the most interesting layer in the context of this project, and the following section gives a closer view of it.

2.3 Network Layer

As mentioned before, this layer is responsible for enabling communication between any parties. The most widely used method for transporting data within and between communication networks is the Internet Protocol (IP).

2.3.1 Internet Protocol

IP is a protocol that provides a connectionless, unreliable, best-effort packet delivery system. More details on these network service types are given below [5]. In the connectionless model, data packets are transferred independently of all others, and each carries the full source and destination address. It is worth mentioning that another type is the connection-oriented model; however, the connection-oriented model and its details are beyond the scope of this project and will not be pursued in this report. The reader can consult [5] for further information on this type of service. Unreliable delivery means that packets may be lost, delayed, duplicated, delivered non-consecutively (in an order other than that in which they were sent), or damaged in transmission.

2.4 Internet Protocol Version 4

IPv4 is the current protocol for communication on the Internet. It is the protocol that underlies most communication on networks today, such as TCP/IP and UDP/IP. The largest weakness of IPv4 is its address space [7]. Each IPv4 address is only 32 bits long and consists of two parts, defined as the network identifier and the host identifier [5]. The standard method of displaying an IPv4 address is as the decimal value of four octets, each separated by a period, for example 192.168.2.5. Traditionally [6], IP addresses were assigned using classful addressing. Five classes of address were created, A to E. A class A network consists of 16,777,214 hosts, while a class B network consists of 65,534 hosts and a class C network consists of 254 hosts. Class D is reserved for use with multicasting, and class E is a block of IP addresses reserved for future use [7]. Since class D and E addresses are not used to address public hosts, the rest of the entire range of IP addresses is carved up into classes A to C. As soon as a site is connected to the Internet, it needs to be given at least an entire class C network. Since many sites only need one or two addresses, this wastes over 200 addresses. Once a site reaches more than 254 addressable machines it needs an entire class B, which wastes over 65,000 addresses, and so on. This allocation system is obviously inefficient and wastes much of a limited resource.
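The scale of this waste is easy to verify. The sketch below is a minimal illustration using Python's standard ipaddress module; the particular prefixes are examples only and stand in for a "class C" sized and a "class B" sized allocation.

import ipaddress

class_c = ipaddress.ip_network("192.168.2.0/24")   # a class C sized block (256 addresses)
class_b = ipaddress.ip_network("172.16.0.0/16")    # a class B sized block (65,536 addresses)

print(class_c.num_addresses - 2)   # 254 usable hosts in a class C
print(class_b.num_addresses - 2)   # 65534 usable hosts in a class B

# A site that only needs 2 addresses but receives a whole class C
# leaves the remaining usable addresses idle.
needed = 2
print((class_c.num_addresses - 2) - needed)        # 252 wasted addresses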
2.4.1 Header

The header is part of the IP packet [5]. There are a number of fields in an IPv4 header; each is briefly explained below.

2.4.2.1 Version
This field (4 bits long) is used to determine the version of the IP datagram being considered. For IPv4 it is set to 4.

2.4.2.2 Internet Header Length (IHL)
The Internet Header Length is the length of the header.

2.4.2.3 Type of Service
Theoretically, this field (1 octet long) should indicate something special about the protocol; however, it has never really been used.

2.4.2.4 Total Length
Total Length is the length of the data in the fragment plus the header.

2.4.2.5 Identification
This field is useful for fragmentation only. Its purpose is to enable the destination node to perform reassembly, which implies that the destination node must know which fragments belong together, i.e. the source, destination and protocol fields should match.

2.4.2.6 Offset
Offset indicates the point at which this fragment belongs in the reassembled packet. The field is related to the fragmentation mechanism and has similar vulnerabilities to the Identification field.

2.4.2.7 Time to Live
TTL limits the time a datagram may remain in the network, which guarantees that no datagram exists forever in the network.

2.4.2.8 Protocol
This field identifies the transport protocol, for example UDP or TCP. Since the field contains an arbitrary value indicating some protocol, encapsulation of one datagram into another (IP tunneling) is possible.

2.4.2.9 Header Checksum
The checksum is used to detect transmission errors. This field was removed in IPv6.

2.4.2.10 Source Address
This field specifies the source address.

2.4.2.11 Destination Address
The destination address (4 octets long) is specified in this field. No attacks related to this field are known.

2.4.2.12 Options
This field (variable size) was designed to extend IP communication. Several options are defined for it, among them security, source routing and route recording.

2.4.2.13 Padding
This field (variable size) is used to pad the IP header with zeros so that its length is a multiple of 32 bits.

2.5 Internet Protocol Version 6

IPv6 is a new version of the protocol, specified in RFC 2460 [5], that overcomes the weaknesses of the current protocol in certain respects. It uses a 128-bit address field, which is four times longer than an IPv4 address. This size of address space removes one of the worst issues with IPv4, and IPv6 does not have classes of addresses. In general, IPv4 and IPv6 are similar in their basic framework, but there are also many differences. At first view, the obvious difference between IPv4 and IPv6 is the addresses. IPv6 addresses range from 0000:0000:0000:0000:0000:0000:0000:0000 to ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff. In addition to this preferred format, IPv6 addresses may be specified in two shortened formats:

Omit leading zeros. Specify IPv6 addresses by omitting leading zeros. For example, the IPv6 address 1050:0000:0000:0000:0005:0600:300c:326b may be written as 1050:0:0:0:5:600:300c:326b.

Double colon. Specify IPv6 addresses by using a double colon (::) in place of a series of zeros. For example, the IPv6 address ff06:0:0:0:0:0:0:c3 may be written as ff06::c3. The double colon may be used only once in an IP address.

IPv6 addresses are similar to IPv4 addresses except that they are 16 octets long. A critical fact to be observed is that the present 32-bit IP addresses can be accommodated in IPv6 as a special case of IPv6 addressing. The standard representation of an IPv6 address is eight hexadecimal values of 16 bits each, separated by colons. Not only does IPv6 have a different address representation, it also discards the previous concept of network classes.
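As a concrete illustration of the two shortened notations described above, the following short sketch uses Python's ipaddress module to print the full and compressed forms of the example addresses from the text:

import ipaddress

addr = ipaddress.IPv6Address("1050:0000:0000:0000:0005:0600:300c:326b")
print(addr.exploded)     # 1050:0000:0000:0000:0005:0600:300c:326b (preferred full form)
print(addr.compressed)   # 1050::5:600:300c:326b (leading zeros omitted and :: applied)

addr2 = ipaddress.IPv6Address("ff06:0:0:0:0:0:0:c3")
print(addr2.compressed)  # ff06::c3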
Six-byte addresses are very popular in 802 LANs, and the next generation of LANs will use the 8-byte address space specified by the Institute of Electrical and Electronics Engineers (IEEE) [9]. Thus, IPv6 addresses needed to be at least 8 bytes long.

2.5.1 IPv6 Header

Some IPv4 header fields have been excluded from IPv6, and some of them have been made optional. As a result, the packet processing time and the packet header size are reduced. The header consists of two parts: the basic IPng header and the IPng extension headers.

2.5.2.1 Version
This field (4 bits long), as in the IPv4 case, is used to determine the version of the IP datagram and is set to 6 in the present case. This field is the same in both versions; the reasoning for this is that the two protocols should coexist during the transition period.

2.5.2.2 Flow Label
This field is 20 bits long and, as yet, there is no specific functionality assigned to it.

2.5.2.3 Payload Length
Only IPv6 has this field. Since the header length is constant in IPv6, just one length field is needed. It replaces the IHL and Total Length fields of IPv4 and carries information about the length of the data (the headers are not included).

2.5.2.4 Next Header
The Next Header field replaces the Protocol field in the IPv4 header.

2.5.2.5 Hop Limit
This field is a hop count that is decremented at each hop. It redefines the Time to Live field present in IPv4.

2.5.2.6 Source Address
The source address is indicated by this field (16 octets long). No attacks related to this field have been experienced.

2.5.2.7 Destination Address
This field (16 octets long) specifies the destination address. No attacks related to this field are known.

IPv6 brings major changes to the IP header. IPv6's header is far more flexible and contains fewer fields, with the number of fields dropping from 13 to 8. Fewer header fields result in a cleaner header format and Quality of Service (QoS) features that were not present in IPv4. The IP option fields in the header have been replaced by a set of optional extension headers. The efficiency of IPv6's header can be seen by comparing the address size to the header size: even though an IPv6 address is four times as large as an IPv4 address, the header is only twice as large. Priority traffic, such as real-time audio or video, can be distinguished from lower-priority traffic through a priority field [8]. The experiment in [27] clearly shows the breakdown of the various headers in both IPv4 and IPv6, and it is evident that the overhead incurred between IPv4 and IPv6 is minimal. In theory, the performance overhead between these two protocols is so small that the benefits of IPv6 should quickly overshadow the negatives.

Table 1: Packet breakdown and overhead incurred by header information
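To make the header comparison concrete, the sketch below lays out the fixed 40-byte IPv6 base header with Python's struct module. All field values (flow label, payload length, documentation addresses) are illustrative assumptions, not values taken from the experiment cited above.

import struct
import ipaddress

version, traffic_class, flow_label = 6, 0, 0x12345   # flow label is 20 bits
payload_length = 1200                                 # length of the data only, headers excluded
next_header = 17                                      # 17 = UDP
hop_limit = 64
src = ipaddress.IPv6Address("2001:db8::1").packed     # 16 octets each
dst = ipaddress.IPv6Address("2001:db8::2").packed

first_word = (version << 28) | (traffic_class << 20) | flow_label
header = struct.pack("!IHBB", first_word, payload_length, next_header, hop_limit) + src + dst
print(len(header))   # 40 bytes: twice the 20-byte IPv4 base header for four times the address size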
2.6 Streaming Overview

In recent years, there has been a major increase in multimedia streaming applications such as audio and video broadcast over the Internet. The growing number of Internet subscribers with broadband access at both work and home enables high-quality multimedia applications to be delivered to the user. However, since the best-effort Internet is unreliable, with high packet loss and inconsistent packet arrival, it does not provide any QoS control. This is a crucial issue when dealing with real-time multimedia traffic. Multimedia streaming is a real-time application in which audio and video stored on a streaming server are streamed to a client upon request. Examples include continuous media servers, digital libraries, and shopping and entertainment services. Prior to streaming, video was usually downloaded. Since it took a long time to download video files, streaming was invented with the intention of avoiding download delays and enhancing the user experience. In streaming, video content is played as it arrives over the network, in the sense that there is no waiting period for a complete download. Real-time streaming has a timing constraint such that the data are played continuously. If data packets do not arrive in time, the playback pauses, which interrupts the multimedia presentation and is definitely annoying to the user. Because of this, multimedia streaming requires isochronous processing and end-to-end QoS [10]. The lack of QoS has not prevented the rapid growth of real-time streaming applications; this growth is expected to continue, and multimedia traffic will form a larger portion of the Internet load. Thus, the overall behavior of these applications will have a significant impact on other Internet traffic.

2.7 Downloading Versus Streaming Applications

Downloading applications such as FTP involve downloading a file before it is viewed by the user. Examples of multimedia downloading applications are downloading an MP3 song to an iPod or other portable device, or downloading a video file to a computer via a P2P application such as BitTorrent. Downloading is usually the simplest and easiest way to deliver media to a user. However, downloading has two potentially important disadvantages for multimedia applications. First, a large buffer is required whenever a large media file, such as an MPEG-4 movie, is downloaded. Second, the amount of time required for the download can be relatively large (depending on the network traffic), requiring the user to wait minutes or even hours before being able to view the content. Thus, while downloading is simple and robust, it provides only limited flexibility both to users and to application designers. In contrast, the streaming mode splits the media bit stream into separate packets which can be transmitted independently. This enables the receiver to decode and play back the parts of the bit stream that have already been received. The transmitter continues to send multimedia data packets while the receiver decodes and simultaneously plays back other, already received parts of the bit stream. This enables a low delay between the moment data is sent by the transmitter and the moment it is viewed by the user. Low delay is of paramount importance for interactive applications such as video conferencing, but it is also important for video on demand, where the user may wish to change channels or programs quickly, and for live broadcast, where the delay must be bounded. Another advantage of streaming is its relatively low storage requirement and increased flexibility for the user compared to downloading. However, streaming applications, unlike downloading applications, have deadlines and other timing requirements to ensure continuous real-time media playout. This leads to new challenges in designing communication systems to best support multimedia streaming applications [12].

2.8 Standards and Protocols for Streaming

A good streaming protocol is required to achieve continuous playback quality in multimedia streaming over the Internet, with a short delay when a user requests multimedia content. The streaming protocol provides services such as transport and QoS control mechanisms, including quality adaptation, congestion control and error control.
The streaming protocol is built on top of a network-level protocol and a transport-level protocol. Multimedia streaming is based on IP networks, and the User Datagram Protocol (UDP) is mainly used, although some streaming applications use TCP. Like TCP, UDP is a transport-layer protocol, but UDP is a connectionless transport protocol. UDP does not guarantee reliable transmission or in-order packet arrival; indeed, there is no guarantee that a packet will arrive at its destination at all [16]. A UDP packet may be lost in the network when there is a lot of traffic. Therefore, UDP is not suitable for data transfer where guaranteed delivery is important, and it is generally not used to send critical data such as web pages or database information. Instead, UDP is commonly used for streaming audio and video. Streaming media formats such as Windows Media Audio files (.WMA), RealPlayer (.RM) and others use UDP because it offers speed. The reason UDP is faster than TCP is that there is no flow control or error correction. Data sent over the Internet is affected by collisions, and errors will be present; UDP is only concerned with speed, which is the main reason why streaming media is often not of high quality. Nevertheless, UDP is the ideal transport-layer protocol for streaming applications, where the priority is to transfer packets from the sender to the destination without adding the delay that would result from retransmitting lost packets.

Since UDP does not guarantee packet delivery, the client needs to rely on the Real-time Transport Protocol (RTP) [10]. RTP provides the low-level transport functions suitable for applications transmitting real-time data, such as video or audio, over multicast or unicast services. The RTP standard consists of two elementary services, transmitted over two different channels. One is the real-time transport protocol itself, which carries the data; the other works as a control and monitoring channel named the RTP Control Protocol (RTCP) [13]. RTP packets are encapsulated within UDP datagrams, which provides high throughput and efficient bandwidth usage. An RTP data packet contains a 12-byte header followed by the payload, which can be a video frame, a set of audio samples, etc. The header includes a payload type indicating the kind of data contained in the packet (e.g. JPEG video, MP3 audio), a 32-bit timestamp, and a sequence number to allow ordering and loss detection of RTP packets [11]. According to the standard [14], the transport of RTP streams can use both the UDP and TCP transport protocols, with a strong preference for the datagram-oriented support offered by UDP. The primary function of RTCP is to provide feedback on the quality of the data distribution. This feedback may be directly useful for controlling adaptive encodings, as well as for fault diagnosis in the transmission. In summary, RTP is a data transfer protocol while RTCP is a control protocol.
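The 12-byte RTP header just described can be sketched in a few lines of Python using struct and a UDP socket. The field values, the loopback destination and the dummy payload below are purely illustrative assumptions, not a complete RTP implementation.

import struct
import socket

version, padding, extension, csrc_count = 2, 0, 0, 0
marker, payload_type = 0, 26            # 26 is the static payload type for JPEG video
sequence_number = 1                     # allows ordering and loss detection
timestamp = 123456                      # 32-bit media timestamp
ssrc = 0x11223344                       # synchronization source identifier

byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
byte1 = (marker << 7) | payload_type
rtp_header = struct.pack("!BBHII", byte0, byte1, sequence_number, timestamp, ssrc)
payload = b"\x00" * 160                 # stand-in for one chunk of media data

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(rtp_header + payload, ("127.0.0.1", 5004))   # RTP packet inside a UDP datagram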
The Real-time Streaming Protocol (RTSP) [25] is a client-server signaling system based on messaging in ASCII format. It establishes procedures and controls one or more time-synchronized streams of continuous media such as audio and video. The protocol is intentionally similar in syntax and operation to HTTP and therefore inherits the option of using proxies, tunnels and caches. RTSP works well both for large audiences and for single-viewer media-on-demand. It provides control functionality such as pause, fast forward, reverse and absolute positioning, and works much like a VCR remote control. The additional information necessary in the negotiation is carried in the Session Description Protocol (SDP), sent as an attachment to the appropriate RTSP response [13].

2.10 The Requirements for Multimedia Applications

Different multimedia applications have different QoS requirements, described by QoS parameters such as throughput, delay, delay variation (jitter) and packet loss. In most cases, an application's QoS requirements can be determined from the user-perceived factors that affect the quality of the application [17]. For example, experiments have concluded that for acceptable quality, the one-way delay for interactive voice should be less than 250 ms. This delay includes the delays imposed by all components of the communication chain, such as delay at the source, transmission delay, delay in the network, and delay at the destination. Several factors affect an application's QoS requirements: whether the application is interactive or noninteractive, user/application characteristics (delay tolerance and intolerance, adaptive and nonadaptive behaviour), and application criticality (mission-critical and non-mission-critical applications) [15]. These factors are discussed in the following sections.

2.10.1 Interactive and Noninteractive Applications

An interactive application involves some form of interaction between two parties, such as people-to-people, people-to-machine or machine-to-machine. Examples of interactive applications are: people-to-people applications such as IP telephony, interactive voice/video and videoconferencing; people-to-machine applications such as video-on-demand (VOD) and streaming audio/video; and machine-to-machine applications such as automatic machine control. The time elapsed between interactions is essential to the success of an interactive application, and the degree of interactivity determines the strictness of the delay requirement. For example, interactive voice applications, which involve human conversation in real time, have stringent delay requirements (on the order of milliseconds). Streaming (playback) video applications involve less interaction and do not require a real-time response; streaming applications therefore have more relaxed delay requirements (on the order of seconds). Often an application's delay tolerance is determined by the user's delay tolerance (i.e., higher delay tolerance leads to more relaxed delay requirements). Delay jitter is also related to QoS support for interactive tasks. Delay jitter can be corrected by de-jittering buffer techniques; however, the buffer introduces delay into the original signal, which also affects the interactivity of the task. In general, an application with a strict delay requirement also has a strict delay jitter requirement [15].

2.10.2 Tolerance and Intolerance

Tolerance and intolerance are also among the key factors that affect the QoS parameter values required by the user. Latency tolerance or intolerance determines the strictness of the delay requirement. As already mentioned, streaming multimedia applications are more latency tolerant than interactive multimedia applications. The level of latency tolerance depends heavily on user satisfaction, user expectation, and the urgency of the application, for example whether it is mission critical.
Distortion tolerance, the degree to which the application's quality may be compromised, depends on user satisfaction, user expectation, and the application's media types. For example, users are more tolerant of video distortion than of audio distortion; in this case, during congestion, the network has to maintain the quality of the audio output over the quality of the video output [15].

2.10.3 Adaptive and Nonadaptive Characteristics

Adaptive and nonadaptive aspects describe the mechanisms invoked by applications to adapt to QoS degradation; the common adaptive techniques are rate adaptation and delay adaptation. Rate-adaptive applications can adjust the data rate injected into the network. During network congestion, such applications reduce the data rate by dropping some packets, increasing the codec data compression, or changing the multimedia properties. This technique may degrade the perceived quality but keeps it within acceptable levels. Delay-adaptive applications tolerate a certain level of delay jitter by deploying a de-jitter buffer or an adaptive playback technique. Adaptation is triggered by some form of implicit or explicit feedback from the network or the end user [15].

2.10.4 Application Criticality

Mission-critical aspects reflect the importance of the application's usage, which determines the strictness of the QoS requirements; failing the mission may result in disastrous consequences. For example, in Air Traffic Control Towers (ATCTs) the traffic controller is responsible for guiding the pilot through direction, takeoff and landing, and the lives of the pilot and passengers may depend on the promptness and accuracy of the Air Traffic Control (ATC) system. In an e-banking system, failure may lead to losses for the bank and leave users unable to perform online transactions (view account summaries, account history and transaction status, manage cheques and transfer funds online) or make online payments (loans, bills and credit cards).

2.10.6 Examples of Application Requirements

Video applications can be classified into two groups: interactive video (e.g. video conferencing, long-distance learning, remote surgery) and streaming video (e.g. RealVideo, Microsoft ASF, QuickTime, Video on Demand, HDTV). As shown in Table 2, the bandwidth requirements of video applications are relatively high, depending on the video codec.

Video codec: Bandwidth requirement
Uncompressed HDTV: 1.5 Gbps
HDTV: 360 Mbps
Standard definition TV (SDTV): 270 Mbps
Compressed MPEG-2: 25-60 Mbps
Broadcast quality HDTV: 19.4 Mbps
MPEG-2 SDTV: 6 Mbps
MPEG-1: 1.5 Mbps
MPEG-4: 5 kbps to 4 Mbps
H.323 (H.263): 28 kbps to 1 Mbps

Table 2: Video Codec Bandwidth Requirements [15]
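The uncompressed figures in Table 2 follow directly from basic video properties (frame size, frame rate and colour depth, discussed again in section 2.13.1). The worked example below uses illustrative values that roughly reproduce the uncompressed HDTV row; it is not taken from the cited reference.

width, height = 1920, 1080      # frame size in pixels
frame_rate = 30                 # frames per second
colour_depth = 24               # bits per pixel

bits_per_second = width * height * colour_depth * frame_rate
print(bits_per_second / 1e9)    # about 1.49 Gbps, close to the 1.5 Gbps in Table 2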
2.11 Packet Delay

Delay has a direct impact on user satisfaction. Real-time media applications require the delivery of information from the source to the destination within a certain period of time. Long delays may cause incidents such as data missing the playback point, which degrades the quality of service of the application; they can also cause user frustration during interactive tasks. For example, the International Telecommunication Union (ITU) considers network delay for voice applications in Recommendation G.114 and defines three bands of one-way delay, as shown in Table 3.

0-150 ms: Acceptable for most user applications.
150-400 ms: Acceptable provided that administrators are aware of the transmission time and the impact it has on the transmission quality of user applications.
Above 400 ms: Unacceptable for general network planning; however, in certain cases this limit is exceeded.

Table 3: Standard one-way delay limits for voice

In the data transmission process, each packet moves from its source to its destination. The process usually starts with a packet leaving a host (the source), passing through a series of routers, and arriving at another host (the destination). The packet may be exposed to several different types of delay at each node along the path while it travels from one node (host or router) to another. The most important of these delays are the nodal processing delay, queuing delay, transmission delay, and propagation delay; together these delays accumulate to give the total nodal delay [18].

2.11.1 Types of Delay

As part of the end-to-end route between source and destination, a packet is sent from an upstream node through router A to router B. When the packet arrives at router A, router A examines the packet's header in order to determine the appropriate outgoing link, and then directs the packet to that particular link.

Processing Delay: The time required to examine the packet's header and determine where to direct the packet is part of the processing delay. Processing delay may include other factors, such as the time needed to check for bit-level errors that occurred while transmitting the packet's bits from the upstream node to router A. Processing delays in high-speed routers are usually on the order of microseconds or less. After this nodal processing, the router directs the packet to the queue that precedes the link to router B.

Queuing Delay (buffering): In the queue, a packet experiences queuing delay as it waits to be transmitted onto the link. The queuing delay of a packet depends on the number of previously arrived packets that are waiting to be transmitted onto the link, so it can vary greatly from packet to packet. If the queue is empty and no other packet is being transmitted, the packet's queuing delay is zero. On the other hand, if traffic is heavy and many other packets are also waiting to be transmitted, the queuing delay will be long. In practice, queuing delays can be on the order of microseconds to milliseconds.

Transmission Delay: Assuming that packets are transmitted in a first-come-first-served manner, as is common in packet-switched networks, a packet can be transmitted only after all the packets that arrived before it have been transmitted. Denote the length of the packet by L bits, and denote the transmission rate of the link from router A to router B by R bits/sec. The rate R is determined by the transmission rate of the link to router B; for example, for a 10 Mbps Ethernet link the rate is R = 10 Mbps, and for a 100 Mbps Ethernet link the rate is R = 100 Mbps. The transmission delay (also called the store-and-forward delay) is L/R. This is the amount of time required to push (that is, transmit) all of the packet's bits into the link. Transmission delays are typically on the order of microseconds to milliseconds in practice.

Propagation Delay: Once a bit is pushed into the link, it needs to propagate to router B. The time required to propagate from the beginning of the link to router B is the propagation delay. A bit propagates at the propagation speed of the link.
The propagation speed depends on the physical medium of the link (i.e. fiber optics, twisted-pair copper wire, and so on) and is in the range of 2 x 10^8 to 3 x 10^8 meters/sec, which is equal to, or slightly less than, the speed of light. The propagation delay is the distance between the two routers divided by the propagation speed; that is, the propagation delay is d/s, where d is the distance between router A and router B and s is the propagation speed of the link. Once the last bit of the packet propagates to node B, it and all the preceding bits of the packet are stored in router B, and the whole process continues with router B now performing the forwarding. In wide-area networks, propagation delays are on the order of milliseconds. If we let dproc, dqueue, dtrans and dprop denote the processing, queuing, transmission and propagation delays, then the total nodal delay is given by dnodal = dproc + dqueue + dtrans + dprop. The contribution of these delay components can vary significantly.

2.11.2 End-to-End Delay

Multimedia streaming requires a bounded end-to-end delay so that multimedia data packets can arrive at the client in time to be decoded and played. For a streaming system, the end-to-end delay can be defined as follows: Ti is the transmission time of packet i from the server, PLi is the playout time of packet i in the player, and Ai is the arrival time of packet i. If packet i arrives in time, it is used for the playback. However, if packet i does not arrive in time, it cannot be used for the playback and causes a multimedia dropout. The outcome of a multimedia dropout is a degradation of the multimedia playback, because the streaming buffer does not hold enough playable content. Furthermore, if a multimedia packet arrives too late, beyond the delay bound, it will not be used in real-time playback; such packets are rendered useless even though they have successfully arrived at the client, and these multimedia data packets waste network resources and contribute to congested network traffic [10].

2.11.3 Delay Jitter

Multimedia streaming requires a bounded end-to-end delay in order to provide continuous and smooth multimedia playback to users. However, the end-to-end delay varies with network conditions; it is therefore unpredictable and difficult to control. If the end-to-end delay is not bounded, it causes a delay jitter problem. Figure 9 shows the packet arrival times in the client buffer. PLi is the playout time of packet i, Ai is the arrival time of packet i at the client buffer, and EAi is the expected arrival time of packet i. This is related to the end-to-end delay of a streaming packet transferred from the server to the client. The delay jitter can be defined as the difference between the arrival time Ai and the expected arrival time EAi, i.e. Ji = |Ai - EAi|. The delay jitter problem complicates the synchronization between packets from a single media stream, or between packets from different media streams. When there is too much delay jitter, the stream becomes useless when received by the client, and this leads to degradation in QoS, because it is difficult to re-adjust the timing relationship between multimedia packets from the same or several media streams so as to ensure a synchronized playback of the information.
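The per-node delay formula dnodal = dproc + dqueue + dtrans + dprop and the jitter definition Ji = |Ai - EAi| can be worked through numerically. The sketch below uses invented timing values purely for illustration.

packet_bits = 12_000            # L: packet length in bits (1500 bytes)
link_rate = 10e6                # R: 10 Mbps link
distance = 1_000_000            # d: 1000 km between routers, in metres
prop_speed = 2e8                # s: propagation speed in metres per second

d_proc = 20e-6                  # processing delay (microseconds range)
d_queue = 2e-3                  # queuing delay, varies with load
d_trans = packet_bits / link_rate       # L/R = 1.2 ms
d_prop = distance / prop_speed          # d/s = 5 ms
d_nodal = d_proc + d_queue + d_trans + d_prop
print(d_nodal * 1000)           # total nodal delay in milliseconds (about 8.2 ms)

# Jitter: deviation of actual arrival times Ai from the expected arrival times EAi.
expected_arrivals = [0.020, 0.040, 0.060, 0.080]    # EAi, seconds
actual_arrivals = [0.021, 0.043, 0.059, 0.085]      # Ai, seconds
jitter = [round(abs(a - e), 3) for a, e in zip(actual_arrivals, expected_arrivals)]
print(jitter)                   # [0.001, 0.003, 0.001, 0.005]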
The conflicting goals of minimizing delay and removing delay jitter have engendered various schemes capable of adapting the delay jitter buffer size to match the time-varying requirements of network delay jitter removal [10]. There are several techniques to deal with delay jitter at the receiver end [12]. As shown in Figure 10, packets travel through the network and experience different end-to-end delays, reaching the destination with timing distortions (incomplete or delayed signal) relative to the original traffic. In technique A, the receiver plays back the signal as soon as the packets arrive; the playback point is shifted from the original timing reference, which introduces distortion into the playback signal. In technique B, the receiver plays back the signal based on the original timing reference, and late packets that miss the playback point are ignored, which also introduces distortion. In technique C, a de-jitter buffer is used: all packets are stored in the buffer and held for some time (the offset delay) before they are retrieved by the receiver according to the original timing reference. The fidelity of the signal is maintained as long as there are packets available in the buffer. Large delay jitter requires a large buffer to hold the packets and smooth out the jitter, and a large buffer may introduce large delays, which will eventually be constrained by the application's delay requirement. In summary, there is a tradeoff between three factors: de-jitter buffer space, the delay requirement, and the fidelity of the playback signal. An alternative is to use a gateway-based method for minimizing the delay increase caused by de-jittering [20]. A gateway for interconnecting two networks may comprise: a receiver for receiving from a first network a plurality of data units in at least one form; a controller for temporarily storing the data units received by the receiver and for outputting the data units on the basis of the de-jittering capability of the destination terminal served by a second network, thereby reducing jitter among the delays of the data units; and a transmitter for sending the data units output by the controller to the destination terminal through the second network.
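Technique C above (a de-jitter buffer with a fixed playout offset that drops late packets) can be sketched in a few lines. All timing values and the offset are invented for illustration; a real player would adapt the offset to measured jitter.

OFFSET = 0.100                                   # hold packets 100 ms before playout

def playout_schedule(packets, offset=OFFSET):
    """packets: list of (sequence_number, send_time, arrival_time) tuples."""
    played, dropped = [], []
    for seq, sent, arrived in packets:
        playout_time = sent + offset             # original timing reference plus offset
        if arrived <= playout_time:
            played.append((seq, playout_time))   # buffered until its playout point
        else:
            dropped.append(seq)                  # too late: ignored, causing distortion
    return played, dropped

packets = [(1, 0.00, 0.030), (2, 0.02, 0.140), (3, 0.04, 0.090)]
print(playout_schedule(packets))
# packet 2 arrives 120 ms after it was sent, misses the 100 ms offset and is dropped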
2.12 Quality of Service (QoS) and Technical Issues in Multimedia Networks

QoS describes the overall experience an application or a user receives over a network, usually referring to the network operator's or Internet service provider's (ISP's) commitment to providing and maintaining acceptable values of the parameters or characteristics required and expected by the user's application [16]. Providing QoS guarantees can be difficult in networks that offer only best-effort service, such as the Internet: IP makes no guarantee about when data will arrive or how much data it can deliver. According to the recommendation of the Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T), Quality of Service is defined as "the collective effect of service performance which determines the degree of satisfaction of a user of that service" [21]. QoS has become an issue when designing multimedia streaming systems. QoS parameters have to be present in all components of such systems, from the application down to the communication level, in order to ensure a certain level of QoS for the user.

The elements of a generalized QoS framework have been identified as QoS principles, QoS specification (capturing the application's QoS requirements), and QoS mechanisms (providing the desired end-to-end QoS). As already mentioned, different systems, such as the application and the network, are described by different parameters; it is therefore crucial to translate user/application QoS into network QoS. In video streaming, for example, different streams have different bit rates, and the relation between the bit rate of a stream and the bandwidth required to carry it is obvious: the higher the bit rate, the more bandwidth is needed. These applications have tight resource requirements and can benefit from non-interference to provide forms of progress guarantees. A video stream places high demands for QoS, performance and reliability on storage servers and communication networks. The traffic management components necessary to support QoS are [22]:

Admission control: The admission control component takes into account resource reservation requests and the available capacity to determine whether to accept a new request with its QoS requirements.

Scheduling: The scheduling component provides QoS by allocating resources according to the service requirements. This requires mapping the user-defined QoS requirement to resource allocations for providing the service.

Resource management: QoS can be provided by over-provisioning a network, but this increases the cost incurred by the provider. Efficient resource management is a cost-effective solution for the provider and ensures that applications get the specified QoS during the course of their execution.

Congestion control: Congestion control is required to prevent harmful behaviour inside a network domain. Some applications may not follow the standard protocol description and may try to steal resources, thereby deteriorating the QoS of other applications. Mechanisms are needed to recover from congestion and control flows accordingly.

Policing/Shaping: Users might send traffic at a rate higher than agreed. Policing is necessary to monitor these situations, while shaping smooths the traffic and reduces its variations over time (a token-bucket sketch of this mechanism is given below).

IPv6 also includes QoS measures which were developed for IPv4, and the principles are mostly the same. QoS makes use of both the Traffic Class and Flow Label header fields to categorize network traffic into different priority groups [29]. The Flow Label field is used to uniquely identify a "flow" of IP packets, such as those belonging to a specific connection. Each flow label is unique per source and destination address pair. This allows QoS levels to be requested on a per-flow basis, rather than per individual packet, giving the sender the potential to request special handling for time-critical data [28].
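As a sketch of the policing/shaping component listed above, the token bucket below admits traffic up to a contracted rate plus a burst allowance; the rate, burst size and packet timings are assumptions chosen only to show the behaviour.

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # refill rate in bytes per second
        self.capacity = burst_bytes       # maximum burst the bucket can hold
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, packet_bytes, now):
        """Return True if the packet conforms to the contracted rate."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes   # conforming traffic consumes tokens
            return True
        return False                      # non-conforming: drop (police) or delay (shape)

bucket = TokenBucket(rate_bps=1_000_000, burst_bytes=3_000)    # 1 Mbps contract, 3 kB burst
for t, size in [(0.0, 1500), (0.001, 1500), (0.002, 1500)]:
    print(t, bucket.allow(size, t))       # the third back-to-back packet exceeds the burst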
2.12.1 End-to-End QoS Service Levels

Service levels refer to end-to-end QoS capabilities, meaning the ability of a network to provide the specific services required by the traffic from end to end or edge to edge. The services vary in the strictness of their QoS discipline, which describes how tightly the service can be bound to specific bandwidth, delay, jitter and loss characteristics. There are three levels of QoS service, which can be categorized as follows [15] [23]:

2.12.1.1 Quantitative (Guaranteed Services/IntServ)

Quantitative (guaranteed) services, also called hard QoS, ensure the provision of quantitative application requirements. The main protocol that works with this architecture is the Resource Reservation Protocol (RSVP), which has a complicated operation and also introduces significant network overhead [26]. This is an absolute reservation of network resources for specific traffic. Guaranteed services ensure network performance (i.e. bandwidth, delay, delay jitter) in statistical or deterministic terms. For example, the network guarantees a minimum bandwidth to an application, or bounds the packet delivery delay to within a certain value. This level is suitable for applications that require overall performance guarantees, such as mission-critical and interactive applications [15].

2.12.1.2 Qualitative (Differentiated Services/DiffServ)

Qualitative (differentiated) services, also called soft QoS, provide a statistical preference rather than a hard and fast guarantee. The DiffServ architecture is more flexible and efficient, as it provides quality of service via a different approach [26]. Some traffic is treated better than the rest, for example with lower delay for one class of applications than for another. An application that belongs to a higher-priority class will receive service before applications that belong to a lower-priority class.

2.12.1.3 Best-Effort Services (Lack of QoS)

The network makes its best effort to deliver traffic, without any performance guarantees, and all traffic is treated equally. This service is adequate for data traffic such as FTP, e-mail and web pages, which does not require a minimum bandwidth or a bounded delivery time.

2.13 QoS Parameters

Certain applications can tolerate some degree of traffic loss while others cannot. All of these requirements are expressed using QoS parameters, which are usually grouped according to several criteria. The QoS parameters below are relevant to multimedia applications: throughput (bandwidth), delay, delay variation (jitter) and packet loss.

2.13.1 Throughput (Bandwidth)

From the application point of view, throughput or bandwidth basically refers to the data rate (bits per second) generated by the application. Throughput is measured in bits per second. Bandwidth is considered to be the network resource that needs to be properly managed and allocated to applications. The throughput required by an application depends on the application's characteristics. For example, in a streaming video application different video properties generate different throughput (see Table 2). A user can select the video quality by varying properties such as frame size (pixels), frame rate (number of frames per second), colour depth (the possible colours represented by a pixel) and compression (MPEG-1, MPEG-2, MPEG-4) [15].

2.13.2 Delay

As discussed in section 2.11, delay has a direct impact on user satisfaction. Real-time media applications require the delivery of information from the source to the destination within a certain period of time. Long delays may cause incidents such as data missing the playback point, which degrades the quality of service of the application. Delay consists of four types: processing delay, queuing delay, transmission delay and propagation delay (see section 2.11.1).

2.13.3 Delay Variation (Jitter)

This topic has already been covered; refer to section 2.11.3.

2.13.4 Packet Loss

Packet loss directly impacts the overall quality of the application: it undermines the reliability of the data or disrupts the service.
At the network level, packet loss can be caused by network congestion, which results in dropped data packets. Another cause of loss is bit errors that occur because of communication channel noise, such as in a wireless channel. Several techniques exist to recover from packet loss or errors, such as packet retransmission, error correction at the physical layer, or codecs at the application layer that can compensate for or conceal the loss [15].

2.14 Congestion Management

Congestion management uses the marking on each packet to determine in which queue to place it; queuing mechanisms on each interface must then prioritize the transmission of packets. "Queuing algorithms take effect when congestion is experienced. By definition, if the link is not congested, then there is no need to queue packets. In the absence of congestion, all packets are delivered directly to the interface" [23]. Congestion may occur at any point where there are speed mismatches, aggregation or confluence; queuing manages congestion to provide bandwidth and delay guarantees. Congestion management is a sophisticated queuing technology, and the main queuing algorithms or congestion-management QoS features are: FIFO (first in, first out), PQ (priority queuing), WFQ (weighted fair queuing) and WRR (weighted round robin).

2.14.1 FIFO (First-In, First-Out)

First-in, first-out (FIFO) is probably the simplest queuing strategy: all packets are stored in a single queue in the order of their arrival and are served sequentially, regardless of their QoS requirements. FIFO provides a best-effort service [15]; no service differentiation is possible and, therefore, no advantage can be taken of the lower QoS demands of tolerant traffic in order to increase link utilization [24].

2.14.2 PQ (Priority Queuing)

PQ ensures that important traffic gets the fastest handling at each point where it is used. It was designed to give strict priority to important traffic. In PQ, each packet is placed in one of four queues (high, medium, normal or low) based on an assigned priority. Packets that are not classified by this priority-list mechanism fall into the normal queue [23]. Queues with higher priority are served exhaustively before queues with lower priority. In particular, the tail of the waiting-time distribution can be non-exponential, because a large amount of high-priority traffic can delay low-priority traffic extensively [24].

2.14.3 WFQ (Weighted Fair Queuing)

Weighted fair queuing schedules packets based on the weight ratio of each queue [15]. "WFQ is one of Cisco's premier queuing techniques." It is a flow-based queuing algorithm that creates bit-wise fairness by allowing each queue to be serviced fairly in terms of byte count. For example, if queue 1 has 100-byte packets and queue 2 has 50-byte packets, the WFQ algorithm takes two packets from queue 2 for every one packet from queue 1, which makes service fair for each queue: 100 bytes each time the queue is serviced [23]. WFQ ensures that queues do not starve for bandwidth and that traffic gets predictable service. Low-volume traffic streams, which comprise the majority of traffic, receive increased service, transmitting the same number of bytes as high-volume streams. WFQ is designed to minimize configuration effort, and it automatically adapts to changing network traffic conditions [23].
2.14.4 WRR (Weighted Round Robin)

Round robin (RR) takes turns servicing its queues, and every queue receives the same share of bandwidth. With WRR, weights are assigned to the queues, and some queues can be served more frequently by prescribing a serving cycle. For example, the network capacity is shared between queues 0 and 1 with a ratio of 2:1 by using the serving cycle 0-0-1. If a queue has nothing to send, the next queue in the cycle is served [24].
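The weighted round robin behaviour described above can be sketched directly with the 0-0-1 serving cycle from the text; the queue contents are made up for illustration.

from collections import deque

queues = {0: deque(["a1", "a2", "a3", "a4"]), 1: deque(["b1", "b2"])}
cycle = [0, 0, 1]                      # queue 0 gets twice the share of queue 1

served = []
while any(queues.values()):
    for q in cycle:
        if queues[q]:                  # if a queue has nothing to send,
            served.append(queues[q].popleft())   # the next queue in the cycle is served
print(served)                          # ['a1', 'a2', 'b1', 'a3', 'a4', 'b2']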
