UK Broadband Internet Access
In recent years the demand for high-speed networking has been growing at an exponential rate. The advent of the Internet has had a significant impact on access networks, particularly in the provision of various communication services. While the expansion of the Internet may be both a cause and an effect of this growth, it is not the only factor that drives the demand for broadband connectivity. As the cost of semiconductor devices continues to fall while the capabilities of such equipment increase, new applications for equipment built with such devices are continually being developed.
In short, it may be said that broadband communications technologies are the convergence of television, telephone and computer networks, enabling the interactive communication of voice, data and video. The literature suggests that there is no established definition of ‘broadband’, and definitions tend to change with the advent of new underlying technology. In the UK, services provided at speeds greater than 384 Kbps are called higher bandwidth, a bandwidth over 2 Mbps is called current generation broadband, and a bandwidth over 10 Mbps is called next generation broadband (Office of the e-Envoy, 2001). Currently in the UK, home users with speeds greater than 256 Kbps are categorized as broadband users; by contrast, in Korea only users with bandwidth over 2 Mbps are considered to have broadband.
Owing to these variations the UK Broadband Stakeholder Group (2001, p. 3) produced a useful definition, “Always on access, at work, at home or on the move provided by a range of fixed line, wireless and satellite technologies to progressively higher bandwidths capable of supporting genuinely new and innovative interactive content, applications and services and the delivery of enhanced public services”. It is evident from the above discussion and definition that broadband relies entirely on the underlying technology and its advances. In this report we will consider the various technologies available, along with their implications, from several perspectives for UK home and small business users.
Originally conceived at Bellcore, Asymmetric DSL (ADSL) is a telco-inspired service that offers a high-speed digital service and analog voice service over a local loop. ADSL is a key technology for broadband: it has the ability to provide megabit data rates over the telcos’ existing copper access infrastructure. ADSL today is capable of providing up to 9 Mbps downstream to the subscriber and 1 Mbps upstream from the subscriber, all over the existing copper access lines (Greggains, 1997). With nearly 700 million copper telephone lines in the world today, all linking directly to the subscriber or end user, this enormous infrastructure can be harnessed through the power of ADSL.
An ADSL local loop is for the exclusive use of the subscriber, with no contention for bandwidth on that local loop. Therefore, despite the inherently higher bandwidth of cable, a crossover point exists where the ADSL service average bandwidth per user can exceed HFC, given sufficiently high use of HFC. Another important feature of ADSL is that it provides for passive transmission of analog voice service (Gaggl et al, 2003).
While there are two widely used ADSL line codes, DMT (Discrete MultiTone) and CAP (Carrierless Amplitude/Phase modulation), each takes a different approach to the technical challenge and both achieve broadly the same results. CAP is a single-carrier technique that uses a wide passband. DMT is a multiple-carrier technique that uses many narrowband passbands as individual carriers. The two have a number of engineering differences, even though they ultimately can offer similar service to the network layers discussed previously.
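The multi-carrier idea behind DMT can be illustrated with a toy bit-loading calculation. The sketch below uses the common SNR-gap approximation, under which each tone independently carries roughly log2(1 + SNR/gap) bits; the SNR profile, the 9.8 dB gap and the 4 kHz symbol rate are illustrative assumptions, not values from the ANSI standard.

```python
import math

def dmt_bit_loading(snr_db_per_tone, gap_db=9.8, max_bits=15):
    """Bits per DMT tone under the SNR-gap approximation.

    Each tone independently carries floor(log2(1 + SNR/gap)) bits,
    so tones in noisy parts of the spectrum simply carry fewer bits,
    whereas a single-carrier scheme such as CAP must equalise the
    whole passband at once.  All figures here are illustrative.
    """
    gap = 10 ** (gap_db / 10)
    bits = []
    for snr_db in snr_db_per_tone:
        snr = 10 ** (snr_db / 10)
        b = int(math.log2(1 + snr / gap))
        bits.append(min(max(b, 0), max_bits))
    return bits

# Hypothetical SNR profile: clean low-frequency tones, noisy high ones.
per_tone = dmt_bit_loading([40, 35, 30, 20, 12, 6])
# At a 4 kHz DMT symbol rate, aggregate rate = 4000 * sum(bits) bit/s.
rate_bps = 4000 * sum(per_tone)
```

The point of the sketch is that DMT degrades gracefully: a noisy region of the spectrum costs only the bits of the tones it covers.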
CAP and DMT advocates certainly differ on the merits of their respective modulation schemes, but they are coming together on further architectural elements of an ADSL system. The ADSL Forum is, in theory, neutral on modulation technique, leaving this discussion to ANSI. Instead, the ADSL Forum is intended to fill gaps left by ANSI with the objective of creating services from modem standardization.
While ADSL achieves a very high data rate over the existing copper pair, the achievable rate is limited by the gauge and quality of the copper line, and by crosstalk and attenuation, both of which worsen with distance (McMichael). With ADSL data rates available at the end of an ordinary copper subscriber line, home working with direct access to office LANs will become truly practical. Today, home working with access to an office LAN is still comparatively rare and costly: a practical connection requires either a leased line or, at a minimum, ISDN.
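The distance dependence described above can be made concrete with a toy rate-reach lookup. The break-points below are purely illustrative; real reach varies with wire gauge, bridged taps and crosstalk:

```python
def adsl_downstream_estimate_mbps(loop_km):
    """Very rough illustrative ADSL rate-reach curve (not from any standard).

    Short loops approach the 8-9 Mbps maximum quoted for ADSL; beyond
    roughly 5.5 km an ordinary copper loop typically cannot sustain
    ADSL at all.  The table values are invented for illustration.
    """
    table = [(1.5, 8.0), (2.5, 6.0), (3.5, 4.0), (4.5, 2.0), (5.5, 0.5)]
    for reach_km, rate_mbps in table:
        if loop_km <= reach_km:
            return rate_mbps
    return 0.0  # loop too long for ADSL
```

A carrier planning deployment would replace this table with measured rate-reach data for its own plant.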
Both home working and Internet access are likely to be well suited to the asymmetrical characteristics of ADSL. In both cases, the data transmission will be in bursts and likely to need to be faster to the user than from the user (Rednet, ADSL FAQ). The home worker is extremely unlikely to be generating massive data volumes but is very likely to be accessing large corporate databases and information resources by connecting into an office LAN.
While ADSL was first conceived by Bellcore in 1989, it is still undergoing rapid advances in development. The most recent step has been the addition of rate adaptive features. This variation of ADSL is called Rate adaptive ADSL (RADSL). Under RADSL, the modems rapidly test the line characteristics and determine the fastest reliable rate at which they can operate; this rate is then set for the duration of the connection. RADSL offers a number of important benefits:
- Modems always operate at the maximum line speed available.
- Line speeds do not have to be predetermined at the time of modem installation.
- As line characteristics vary over time, the modems adapt automatically.
Despite these advantages there are some challenges faced by RADSL. RADSL creates marketing and administrative problems for setting pricing and bandwidth guarantees. What do you charge the user when the user does not know in advance what speed he will get? Furthermore, the bit rate can change during a session as line conditions change. How this affects pricing is to be determined.
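The rate-adaptive start-up can be sketched as a search over candidate rates, from fastest to slowest, locking in the first rate the line can sustain with an adequate noise margin. The rate table, the 6 dB margin target and the line model below are all hypothetical:

```python
def negotiate_radsl_rate(estimated_margin_db, candidate_rates_kbps=None,
                         required_margin_db=6.0):
    """Pick the fastest candidate rate whose noise margin meets the target.

    estimated_margin_db: function mapping a rate (kbit/s) to the noise
    margin (dB) the modems measure at that rate on this line.  The rate
    table and the 6 dB target are illustrative, not from any standard.
    """
    if candidate_rates_kbps is None:
        candidate_rates_kbps = [7168, 4096, 2048, 1024, 512, 256]
    for rate in candidate_rates_kbps:          # fastest first
        if estimated_margin_db(rate) >= required_margin_db:
            return rate                        # fixed for the whole session
    return None                                # line cannot sustain any rate

# Hypothetical line whose margin falls as the attempted rate rises:
chosen = negotiate_radsl_rate(lambda r: 20.0 - r / 500.0)
```

Note how the chosen rate is a property of the individual line, which is exactly what makes pre-announced pricing tiers awkward for RADSL.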
ADSL is currently a standard application in the field of wired communications. With ADSL2 and ADSL2+, a new standard for increased data rates and improved loop reach has become available. ADSL2 increases the bit-rate and line-reach performance of ADSL, enabling up to 256 kbit/s of additional performance on typical lines. The new all-digital mode extends ADSL transmission through the 0 to 25 kHz band to provide a total of 32 or 64 upstream tones. This enables an additional 256 kbit/s of upstream data rate on top of the other performance improvements described above, and is particularly important for improving performance on long lines.
Fast start-up is provided from stand-by mode, sleep mode, and as an error recovery during Showtime mode. The L2 low power mode has been added to enable statistical power savings based on user activity. Service is kept alive and full-rate operation is restored within 0.5 ms by maintaining low-bit-rate transmission during the L2 mode. Since signal energy is maintained during the L2 mode, non-stationary crosstalk behavior is minimized.
xDSL is an enhanced copper system technology including ADSL, HDSL (high-bit-rate DSL), SDSL (single-line DSL), VDSL, and SHDSL (symmetric high-bit-rate DSL). Among the DSL options, there is a trade-off between distance and transmission capacity (speed). ADSL uses a single twisted copper pair for asymmetric transmission, while HDSL offers cost-effective T1 or E1 services on existing copper wire (Ims et al, 1997).
The term Single-Line Digital Subscriber Line (SDSL) has two interpretations. It has been used to describe a proprietary, one-pair, symmetric service using a variety of modulation techniques. It is also the official designation of the ETSI project to develop a standard based on ANSI HDSL-2, but providing variable-rate data, voice, and ISDN without the use of splitters. HDSL-2 is intended to have a single speed in both directions, namely T1. SDSL is expected to be rate adaptable, with a maximum of 2 Mbps.
HFC, two-way upgrades and the commitment to digital services (TV and data) solve operational problems of cable operators and expand their product offerings. Consumers get more channels, programmers get more shelf space, Internet service providers get the speed necessary to attract advertising, and cable operators can offer new services. These new services have propelled the stock prices of cable operators and their suppliers to all time highs.
The intent of data services over cable is to provide high-speed Internet access for computers with Ethernet or Universal Serial Bus (USB) ports. Personal computers, including recent models of Apple Computer, support both Ethernet and USB. Cable networks can provide high-speed, always connected services for relatively low costs. Cost advantages arise because cable provides a natural multiplexing function (Mayhew and Stockton, 1998). That is, one cable port at the head end can connect hundreds of users simultaneously. Telephone networks, on the other hand, require a separate line card for each phone line.
Widespread industry development of cable for data depends on resolving several technical challenges:
1. Competition from xDSL and other services - The lucrative market for high-speed data services will attract service providers using other technologies.
2. Return path noise problems - The return path occupies the frequency range of 5 to 42 MHz. These low frequencies have good attenuation properties; on the other hand, because those frequencies are heavily used by other services, ingress noise is a problem.
3. Scaling techniques - Because cable is a shared medium, care must be taken to avoid congestion as the number of users grows and the usage of each user grows. In both cases, there needs to be a plan for how to manage scaling for video and ATM for data services and control functions.
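The scaling concern in point 3 is easy to quantify: on a shared cable segment, average bandwidth per active user falls in proportion to the number of simultaneously active subscribers. A back-of-the-envelope sketch with hypothetical figures:

```python
def avg_bandwidth_per_user_mbps(segment_capacity_mbps, subscribers,
                                activity_ratio=0.2):
    """Average downstream bandwidth per active user on a shared segment.

    activity_ratio models the fraction of subscribers active at once;
    all figures are illustrative, not drawn from any operator's plant.
    """
    active = max(1, round(subscribers * activity_ratio))
    return segment_capacity_mbps / active

# A hypothetical 27 Mbps downstream channel shared by 500 homes,
# with one in five homes active at a time:
per_user = avg_bandwidth_per_user_mbps(27, 500)
```

This is also the arithmetic behind the ADSL crossover point mentioned earlier: an unshared ADSL loop beats a nominally faster shared cable channel once enough cable subscribers are active.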
Fiber access networks, namely Fiber to the Building (FTTB), Fiber to the Curb (FTTC), and Fiber to the Home (FTTH), are technologies to move fiber closer to the home - and, in the case of FTTH, into the home. These are referred to collectively as FTTx.
Current strong demand for computer communication services such as Internet access requires economical solutions for providing broadband capabilities to the access network. Several approaches to relieving the so-called access bottleneck have been proposed and tested: HFC using cable modems, ADSL using metallic cables, wireless access, FTTC using VDSL, and fibre to the home (FTTH). Among these approaches, an economical FTTH system is the key to realizing the service platform of the multimedia era (Bessho, 1997). FTTH has been considered an ideal solution for access networks ever since the invention of optical fibre communications, because of the huge capacity, small size, light weight, and immunity to electromagnetic interference of optical fibres.
FTTH comes in two forms. First is a shared form very much like HFC. It has a shared forward channel and point-to-point reverse channels with a MAC protocol to arbitrate return traffic contention. This form of FTTH is called passive optical networks (PON). The second form is a point-to-point optical service wherein each residence has a dedicated optical channel to the carrier that is shared only by the devices in the home, called dedicated FTTH.
Because optical fibres are widely used in backbone networks, Wide Area Networks (WANs), and Metropolitan Area Networks (MANs), and are also being deployed in Local Area Networks (LANs) with the introduction of new optical Ethernet standards, implementing FTTH in access networks, also called _the last mile_, will complete the all-optical-network revolution. Full opticalization of the access network, that is, the realization of FTTH systems, is the key to providing various kinds of advanced services in conjunction with computer use.
A detailed review of the benefits and challenges for FTTH has been provided in Appendix B of this report.
Cables and wires are not without their problems. Digging trenches or climbing poles for installation can involve difficulties, including problems of construction permits and easements, the aesthetics of aerial cables, and backhoes that can inadvertently dig up cables; all of this can add up to high installation costs. Furthermore, the cable may be installed in the wrong place, such as an area with a disappointing market for services. In addition, air doesn't rust or fall down in bad weather, as cable and wiring can. To some observers, including the operators themselves, the fixed networks of wired systems may look like vulnerable high-capital assets in a world of fast-changing technologies. This has led to an increased need for wireless and mobile systems (Verbauwhede and Nicol, 2000). Several standards have also emerged over time, which will be discussed later in this report.
Wireless Access Networks utilize radio frequency (RF) spectrum, which is generally defined as the frequency range of 300 kHz through 300 GHz. The frequency that the network is using determines the amount of bandwidth that is available (Morais, 2004): the greater the frequency, the more bandwidth is available. The radius of a wireless service is known as the footprint of the network.
The footprint varies from a 5 km radius to the whole country. A larger footprint allows millions of subscribers to be served by a single transmitter, which reduces cost and has the side benefit of enabling simultaneous reception by every subscriber. The drawback is that interactive, two-way communication is inhibited as the footprint increases: a larger footprint means that potentially more subscribers are communicating with the network provider simultaneously, and return-path arbitration becomes increasingly problematic. One of the main advantages of fixed wireless is the ability to connect with users in remote areas without laying new cables.
Some wireless communications, such as microwave relay, are point-to-point communications: one transmitter communicates with exactly one receiver. These systems are used to connect corporate sites to the backbone network via line-of-sight (LOS) links spanning a few tens of kilometers (O’Reilly et al, 2000). The alternative is point-to-multipoint (PMP) communication: one transmitter communicates with multiple receivers simultaneously, as in satellite and broadcast television. The advantages of point-to-point communication are privacy, guaranteed bandwidth, and the absence of a bandwidth arbitration mechanism.
Point-to-point technology, however, requires a transmitter and receiver for each potential session, which increases cost (Morais, 2004). Point-to-multipoint communication, in which one transmitter serves multiple receivers simultaneously, is therefore seen as an important cost-reduction technique. It does, however, require an arbitration mechanism, which makes PMP software very complex. Providing two-way PMP capability is the real challenge of wireless access.
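The return-path arbitration problem can be illustrated with the simplest possible scheme: the head end polls subscribers round-robin and grants upstream slots only to those with queued data. Real PMP systems use far more elaborate request/grant protocols; this is only a sketch with invented subscriber names:

```python
from collections import deque

def round_robin_grants(queues, slots):
    """Grant upstream slots round-robin among subscribers with queued data.

    queues: dict mapping subscriber id -> deque of pending packets.
    Returns the ordered list of (subscriber, packet) transmissions.
    """
    order = deque(sorted(queues))
    grants = []
    while slots > 0 and any(queues[s] for s in queues):
        sub = order[0]
        order.rotate(-1)                  # move polled subscriber to the back
        if queues[sub]:                   # grant only if data is waiting
            grants.append((sub, queues[sub].popleft()))
            slots -= 1
    return grants

# Three hypothetical subscribers; C is idle and is skipped without
# consuming an upstream slot.
q = {"A": deque(["a1", "a2"]), "B": deque(["b1"]), "C": deque()}
schedule = round_robin_grants(q, 4)
```

Even this toy version shows why PMP arbitration gets harder with footprint: the poll cycle grows with the number of subscribers sharing the return path.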
The IEEE 802.16 standard, often referred to as WiMax, heralds the entry of broadband wireless access as a major new tool in the effort to link homes and businesses to core telecommunications networks worldwide (Eklund et al, 2002). WiMax is a further addition to existing broadband options such as DSL, cable and WiFi, and promises to rapidly provide broadband access to locations in the world’s rural and developing areas where broadband is currently unavailable (Ghosh et al, 2005). Although wireless broadband systems have been in use for several years, WiMax is a relatively recent development and is still in its early years.
Satellites are well suited for carrying typical broadband services. A satellite-based infrastructure can in many cases be established to offer widespread service provision with greater ease and simplicity than an infrastructure based on terrestrial broadband links (Sun, 2001). Thus, the ability to service many users and solve the expensive ‘last-mile' issue without dedicating to each user cable, fiber, switching equipment ports, etc. makes satellites attractive for broadband communication (Hadjitheodosiou, 1999). Satellites are also attractive for interconnection of geographically distributed high-speed networks. Hence, while much broadband communication today is carried via terrestrial links, satellites will come to play a greater and more important role (Kirtay, 2002).
Figure 1 - Two-way satellite system (Courtesy: BT Openworld)
Satellite broadband users receive transmissions via the satellite link (downstream). In most cases, however, users cannot send requests upstream over the satellite, because satellite uplinks require expensive ground stations and large antennae; they therefore still need a modem subscription to an Internet service provider (ISP) to complete the transfer (another solution uses utility power lines). Such a system is termed a one-way satellite system. Interactive satellite television is a good example of one-way satellite communication: users receive broadcasts on their mini-dish but interact with the system through their phone line. In a two-way satellite system, as shown in the figure above, the user's dish can transmit as well as receive. No phone line is needed, which makes it appropriate for remote locations, though installation costs can be very high owing to the relative infancy of this technology.
Rapid growth in bandwidth, together with the growth of wireless connectivity to the Internet, has lately been increasing levels of broadband mobile Internet connectivity and its market penetration. This wireless connectivity reflects the emergence of so-called 3G networks. Although it evolved in a different context of mobile technology, 3G wireless is emerging as another option for broadband Internet access. Given the range of possible 3G technologies and their increased costs, handset developers are unwilling to absorb the costs, leading to higher handset prices for end-users (consumers) and fuelling, in part, the very low interest in and uptake of current broadband mobile services (Sandbach, 2001; AT Kearney, 2002).
There are now so-called fourth-generation (4G) technologies in research laboratories that provide, as part of their basis, means for bridging competing standards (e.g. linking the 3G Universal Mobile Telecommunications System and 802.11 networks, as Lucent has recently done); for more details and discussion of wireless standards, see Smith et al. (2002).
The major advantage of mobile broadband over fixed wireless is mobility. Fixed wireless broadband refers to devices or systems situated in fixed locations, such as an office or home, as opposed to mobile devices such as mobile phones and personal digital assistants (PDAs, e.g. Palm Pilots). There are, however, several challenges still to be addressed, such as security (Krishnamurthy et al, 2002) and other infrastructural issues. Despite these challenges, as Haddon (2001) and Haddon et al. (2001) noted, there is an increasing level of domestication or consumption of broadband and mobile access in work and life without a great deal of attention to how this is being done.
National interests in broadband clearly fall within the ICTs policy trend. Broadband diffusion and capacity development are central to debates in many countries surrounding the role of government in developing broadband capacity, particularly focusing on the use of public money (e.g. Broadband Stakeholder Group, 2001). In the UK, for example, the Broadband Stakeholder Group (2001) recommended a strategy for accelerating broadband penetration that included 15 strategic recommendations in three areas: accelerating market-driven deployment and take-up, enabling public-sector-driven deployment and use, and ensuring appropriate regulation. Policy is typically enacted through regulation (or its absence).
The regulatory environment in telecommunications is clearly in flux, but some of the basic structures remain. With the advent of the Internet, these services and their infrastructures have begun to converge, making the separate regulatory structures for each less meaningful, even as new regulatory structures are being developed. A new regulator, the Office of Communications (OFCOM), was created in the UK to ‘place content and competition regulators, telecommunications and broadcasting regulators on the same premises’ (Office of Telecommunications, 2001). The Prime Minister's Strategy Office has announced a new Digital Strategy for the future, which will be developed by the Department of Trade and Industry along with the Broadband Stakeholder Group.
A number of related pieces of legislation have been passed or are under consideration.
The following figure represents the number of users in the United Kingdom who have access to broadband as depicted by the latest survey produced by OFCOM (2005).
Figure 2 - OFCOM report on broadband usage
It is evident from the report above that only 21% of the UK population, on average, uses broadband. The OECD (2004) report further suggests that, of these, 70% of users have a DSL connection, 29% have cable connections, and 1% use other technologies such as satellite, wireless or mobile broadband. As far as availability is concerned, ADSL will be available to 99% of the UK population by the end of 2005, cable access is available to 50% of the UK population, and 99% of the UK population already have access to satellite broadband (UK Broadband Monitor, 2005).
Trials are ongoing for the launch of ADSL2 and ADSL2+ by EasyNet by the end of summer 2005, and for easy migration between ISPs by Freedom2Surf. BT is trying hard to provide broadband to 100% of the UK population and frequently undertakes trials of wireless broadband access. Recently, on 20 May 2005, a contract was signed between the University of Kent and Telabria Ltd to conduct wireless broadband trials in the Canterbury area. Such tests are becoming a real necessity so that wireless options can be made available to users in remote locations who have no option of an ADSL or cable connection.
In fact, for these remote users wireless and satellite connections are the only options available, but owing to the high costs involved in satellite communication very few companies or individuals can afford it at present. That makes wireless the most reasonable option for rural areas; unfortunately, wireless broadband is still in its infancy in the UK, and trials such as those mentioned above are still under way to introduce it as soon as possible.
The new challenges for broadband from the infrastructural point of view include networking multiple consumer electronic devices, each with their own distinct networking needs, as well as a new physical environment (Hwang et al., 2002). The network infrastructure design for wireless broadband is argued to change because of lower data rates and reduced reliability, as well as the need for managing communication between physically mobile devices (e.g. Bing, 1999). Wireless issues familiar to the telecommunications world, such as support for roaming and user authentication, become important considerations in the broadband networking world (Ala-Laurila et al., 2001).
The popularity of the Internet has grown remarkably in the last few years, and it has created a demand for ubiquitous networks. A report (Internet Connectivity, 2004) on the survey of Internet Service Providers (ISPs) shows that between November 2003 and November 2004 there was a 4.1 per cent increase in the number of active subscriptions to the Internet. The market share for permanent connections continued to increase in November and now accounts for 37.7 per cent of all connections. Dial-up Internet connections continued to decrease, with a year on year fall to November 2004 of 18 per cent.
According to BBC News (Wakefield, 2004), the number of broadband connections in the UK finally overtook dial-up in December 2004, and BT announced that it was making a new broadband connection every 10 seconds. According to figures gathered by the industry watchdog Ofcom, this growth means that the UK has now surpassed Germany in terms of broadband users per 100 people. The UK total of 5.3 million translates into 7.5 connections per 100 people, compared with 6.7 in Germany and 15.8 in the Netherlands (BBC, 2004).
The fact that mobile Internet services providing a data rate of around 10 kb/s are widely accepted, while 1 Mb/s Internet access is regarded as a standard, shows that ubiquitous network environments are gaining recognition. Although third-generation (3G) mobile networks are expected to provide high-speed access, the current access speeds of 144 kb/s at most under mobile conditions are insufficient for many applications (Mitsugi et al, 2003).
In spite of strong demand, for technical and economic reasons none of the current terrestrial networks provides broadband services to high-speed, long-distance transportation systems such as express trains, airlines or ships, in which millions of passengers travel each day. Satellite networks are also the only cost-efficient means of providing broadband services to rural areas.
There are many technological challenges that must be met, for both the communication system and the on-board antenna system, before such a target system can be realized (Meguro et al, 2004). For mobile broadband satellite communication systems, the most important issues are maximum spectrum and power resource utilization techniques, and ultra-lightweight on-board antenna reflector and feeder technologies (Karasawa, et al, 1997).
Consumer knowledge of broadband and digital TV is relatively high, with a majority of residential consumers both aware of and able to correctly describe these terms. The figure below shows the awareness ratio of broadband compared with other communication services such as digital TV, 3G and digital radio (Ofcom, 2005).
Figure 3 - Awareness and Understanding of communication service terms (Courtesy: OFCOM, 2005)
Many large technology companies have a vision of a world with broadband-connected homes equipped with new Internet applications, and they prioritize their core strategies around broadband. Perhaps the most notable example is Microsoft's entry into the video console game market with the emphasis on built-in broadband capability, in order to maintain a technological leadership in the residential online gaming industry.
As for the perceived future of broadband services, there is a divide between optimists and pessimists. Some think that next generation broadband services will be universally available and exploited in the UK by 2010 and that the technology will become increasingly invisible, while others warn that digital divides will remain in availability, take-up and exploitation (Broadband Reports; Todd). The commercial development of the broadband Internet is still in its early phase, while advancement in information technology is accelerating. If leading technology companies take broadband Internet as the core of their strategies and invest aggressively, then it is only a question of when, not if, broadband Internet will alter the competitive landscape of the industry and change people's lives through innovative Internet applications.
Growth in traffic and users is forcing ISPs to scale their networks’ performance and capacity by introducing gigabit and terabit routers, optical switches, dense wave division multiplexing (DWDM), and 1x/10x gigabit Ethernet (Cook, 2001). At the same time, however, they are trying to reduce operational costs and expand revenue-generating service offerings. ISPs are driven to lower costs, maximize performance, and generate revenue. Thus, the choice of where (and with whom) to transit, directly impacts these driving factors. In general, more ISPs are moving toward peering relationships to help lower costs and improve performance by minimizing the traffic flowing over larger ISP backbone networks.
This trend is being facilitated by the introduction of gigabit Ethernet (GE) and optical technologies, which enable higher-performance switching capacity. Possessing the access technology is only half of the battle for service providers. The ability to manage the network efficiently and in a timely fashion is essential in the current competitive market place. Network management can be defined as the set of operation support systems that service providers use to deploy, configure, maintain, and monitor the network and the services that are carried over it.
A report on Broadband Quality of Service (Ofcom, 2004) highlighted the factors it considered most important in terms of quality of service, namely network efficiency, changes in customer usage patterns, and customer service and billing systems. According to its market research, most broadband users, especially business users, appear unwilling to switch providers, mainly because they are satisfied with their existing ISP.
Traffic engineering is another challenging function of the management process for IP networks. It represents the action that the network administrator/operator should take in order to relieve a potential servicing problem before the service is affected. This may include re-homing, re-routing, load balancing, congestion control, capacity expansion, network dimensioning, and network planning. Several traffic models and network dimensioning methods for packet networks have been proposed in the literature (Fonseca and Zukerman, 1997; Onvural, 1995; Rayes, 2000), and the operator should choose an appropriate model depending on the usage and function of the network element. For instance, traffic techniques for IP edge routers include packet classification, admission control, and configuration management, whereas congestion management and congestion avoidance are typical considerations of backbone routers or switches.
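One edge-router technique named above, admission control, is commonly built on a token bucket. A minimal sketch follows; the rates and packet sizes are hypothetical, not drawn from the cited literature:

```python
class TokenBucket:
    """Token-bucket admission control: a packet is admitted only while
    tokens, replenished at `rate` bytes/s and capped at `burst` bytes,
    remain available.  A common building block for rate policing and
    congestion avoidance at IP edge routers."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst               # start with a full bucket
        self.last = 0.0

    def admit(self, now, packet_bytes):
        # Replenish tokens for the elapsed time, then test the packet.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

# Hypothetical policy: 1000 bytes/s sustained, 1500-byte burst.
tb = TokenBucket(rate=1000, burst=1500)
first = tb.admit(0.0, 1500)    # full bucket: admitted
second = tb.admit(0.1, 1500)   # only 100 tokens replenished: rejected
```

The same mechanism, with larger parameters, also describes the traffic contracts a backbone operator might police at its edges.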
The flexibility of IP broadband access networks, especially FTTx, introduces new challenges to hardware vendors as well as service providers. The most important challenge is perhaps network and service security. Many local government agencies require service providers to implement strict security and traceability techniques to monitor subscribers (which user has which IP address) at all times. Service providers are extremely concerned about network security and subscriber privacy. Security management also includes authorization and other essential secure-communications issues. Issues related to authentication and authorization include the robustness of the methods used in verifying an entity's identity, the establishment of trusted domains to define authorization boundaries, and the requirements of uniqueness in namespace (Foschiano and Paggen, 2001).
From what we have seen so far, ADSL seems likely to remain the most prominent broadband technology in the near future, but it will face stiff competition from the wireless market as soon as there are positive results from the ongoing trials. Satellite broadband is not an imminent threat to ADSL and cable networks, and cable's market share should remain much the same as it is now in the UK. The existing copper infrastructure is a real advantage for ADSL over its rivals, while the relative immaturity of the new technological infrastructure, along with the cost implications of satellite access, remains a real barrier to the adoption of technologies such as wireless. At present there seems to be no major technological evolution under way apart from what has already been discussed in this report.
Therefore, it is of paramount importance for any ISP to position itself wisely to take first-mover advantage and prepare for the next broadband era.
As things stand, ADSL seems likely to stay in the lead for at least the next five years. Wireless is the most threatening technology for ADSL and cable in the coming decade and should eventually overtake them; the actual timing will depend largely on the success of existing trials. In the author's view, the research focus should be on wireless technology, moving from fixed wireless to mobile wireless.