
  • ICANN community groups call for involvement in exploration of GDPR’s impact
  • Two gTLDs shut off public access to WHOIS data, citing a clash with EU law
  • Tiered access system likely, with “legitimate interest” required to access data

The Internet Corporation for Assigned Names and Numbers (ICANN) has come under fire for excluding the full community in its exploration of the General Data Protection Regulation’s (GDPR’s) impact on the WHOIS system. ICANN has stated that the regulation could impact its ability to maintain a single global WHOIS system, and now this week two generic top-level domains withdrew public access to registrant information. Trademark counsel should follow the issue closely, as it could lead to the end of WHOIS in its current form and the ability to easily (and cost-effectively) identify the owners of infringing domains. Whatever the outcome, policing activities are set to get harder.

The GDPR, adopted in 2016, becomes enforceable on May 25 2018 uniformly across the European Union. The aim of the regulation is to protect EU citizens and residents from privacy and data breaches, and it therefore requires explicit consent to be obtained for the collection – and use, including publication – of personal data. Crucially, while an EU regulation, it applies to all companies processing and holding the personal data of subjects residing in the EU, regardless of the company’s location. Therefore, ICANN and non-EU registries and registrars are impacted.

For ICANN, the GDPR is set to have a significant impact on WHOIS, with the organisation acknowledging: “Since GDPR will likely effect how WHOIS data is displayed, it could impact our ability to maintain a single global WHOIS system. In turn, this will likely impact either ICANN’s agreements or its ability to enforce contractual compliance of its agreements using a single and consistent approach.” In short, on the face of it, the GDPR could result in WHOIS as we know it coming to an end.

To explore the possible ramifications, ICANN established a Compliance Task Force and in August the Business Constituency (BC) expressed concern with the direction of its efforts. Noting that the task force does not “accommodate full participation from the ICANN community”, the letter argued for the development of a document that defends WHOIS and how the system serves the public interest. It urged the development of “an action plan that will create a narrative to present to regulators that defends WHOIS, and examines how it is consistent with the GDPR”. The Intellectual Property Constituency (IPC) subsequently echoed this call, writing to ICANN earlier this month to urge that it update its plan by focusing on the public interests for maintaining the WHOIS system as well as the centrality of WHOIS to the DNS, “rather than starting out with the assumption that WHOIS is somehow incompatible with the GDPR”.

More recently, on October 13, the registries and registrars stakeholder groups told ICANN that the requirements under GDPR and contracts with ICANN stand in conflict with each other, and expressed frustration at the seeming lack of progress on efforts to reconcile these differences. They too urged an inclusive approach to the development of new policies, threatening: “If we are unable to work together to identify a shared solution, contracted parties must necessarily develop their own approaches to dealing with the conflicts between GDPR and their contractual requirements, which may or may not align with each other.”

Days later, ICANN published a 17-page memo it received from European law firm Hamilton, which it had commissioned to provide an independent legal analysis. The firm notes that “from an outside perspective, the purposes of the data processing within the WHOIS services are currently not entirely clear and transparent”. However, from a data controller perspective it suggests that ICANN, registries and registrars “are all considered to be joint controllers” – meaning that the respective roles do need to be set out (for instance in relevant agreements). As to the current level of consent given by registrants, it notes that this will need to be reviewed to ensure that consents are unambiguous, informed and voluntary. And while contact details are required for execution of the agreement, it suggests that making personal information public may not be necessary. In short, the status quo is not GDPR compliant.

Interestingly, Hamilton did note that “the use of WHOIS data to investigate fraud, consumer deception, intellectual property violations, or other violations of law” could qualify as a legitimate interest, although this would have to be weighed against the rights and freedoms of the data subject.

Hamilton’s overall conclusion, though, is that “it will not be possible to claim legitimate interest as a legal ground for processing of personal data as currently performed through the WHOIS services on an unchanged basis”. While the current system would be acceptable if based on consent, it notes that this would be a “complex solution”, dependent on registrants providing (and withdrawing) their consents.

Last week, the Non-Commercial Stakeholders Group also waded in on the issue, arguing that the BC and IPC’s positions “involve a misread of the GDPR which, at best, underestimates the risks associated with non-compliance and, at worst, entirely dismisses the public interest in respecting the fundamental right to privacy… It is our view, that in the case of the intellectual property community, they are externalising the risks from their actions onto ICANN and the contracted parties…. ICANN owes no obligation to, and is not responsible for, how or why other parties use WHOIS. To do so would be reckless and expose ICANN, as the data controller, to real liability under the GDPR”.

While ICANN continues its exploration, and the debate escalates, the spectre of contracted parties going it alone in a bid to avoid liability is already becoming reality. Yesterday Domain Incite’s Kevin Murphy reported that two Dutch geo-gTLDs – ‘.amsterdam’ and ‘.frl’ – are refusing to provide public access to WHOIS information, arguing that the provisions in their Registry Agreements are “null and void” under Dutch and European Union law.

One possible outcome to enable compliance is a tiered system that enables registrars to keep on record the details of registrants (that information being required to perform the agreement), this information then being made available to law enforcement if deemed legitimate interest. This is one model suggested by Hamilton and – at this stage – appears to be the more likely outcome. Murphy notes that this is the approach now taken by the ‘.amsterdam’ and ‘.frl’ TLDs, which will provide WHOIS access to vetted individuals such as law enforcement officials. However, if adopted universally, this approach would add a layer of complexity to the policing activities of rights owners, who would have to go through law enforcement authorities to request action against infringing sites.

Under the GDPR, fines of up to €20 million or 4% of group turnover (whichever is larger) can be levied for breaches by both data controllers and processors. The received wisdom is that the 4% penalty is unlikely to be widely applied but that – in order to demonstrate the seriousness of the regulation – an early high-profile party could find itself facing such a fine. ICANN would certainly represent such a high-profile name, and the spectre of heavy fines is no doubt weighing heavily on both the organisation and its contracted parties. At next week’s ICANN meeting in Abu Dhabi, the GDPR is likely to be the focus of in-depth and emotive discussion, and further jostling for influence.

For rights holders it is a topic worth following because, depending on the approach taken by ICANN, the introduction of GDPR could be a game-changer for both the WHOIS system and the ability of trademark counsel to easily (and cost-effectively) identify the owners of infringing domains and website content.

[Culled from]



It is easy to get a server. Anyone can set up a machine in their basement and start publishing websites. Furthermore, most web hosting companies offer leased servers and virtual private servers at affordable prices. All of this means that someone with absolutely no experience can start a server, publish websites, or even host other people’s sites.

Fortunately, there are plenty of forums and online documentation to help newbie system administrators get started. Whether or not you are one of them, there are several security threats to Internet-connected servers that you should be aware of and know how to prevent and mitigate. The following nine threats are common ones that attackers like to use to either gain access to your server or bring it to its knees.

1. Brute Force Attack
In a brute force attack, the intruder attempts to gain access to a server by guessing a user password (usually the root administrator) through the SSH server, Mail server, or other service running on your system. The attacker will normally use software that will check every possible combination to find the one that works. Brute force detection software will alert you when multiple failed attempts to gain access are in progress and disable access from the offending IP address.
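The detection side described above can be sketched in a few lines: count failed login attempts per source IP and flag any address that crosses a threshold. The threshold and the sample data below are illustrative assumptions, not values from any particular tool.

```python
from collections import defaultdict

THRESHOLD = 5  # illustrative: how many failures before an IP is flagged

def flag_brute_force(failed_attempt_ips, threshold=THRESHOLD):
    """Return the set of IPs with `threshold` or more failed attempts."""
    counts = defaultdict(int)
    for ip in failed_attempt_ips:
        counts[ip] += 1
    return {ip for ip, n in counts.items() if n >= threshold}

# One repeat offender among the failures; only it should be flagged.
attempts = ["10.0.0.9"] * 7 + ["192.168.1.5"] * 2
print(flag_brute_force(attempts))  # {'10.0.0.9'}
```

Real tools such as fail2ban work on the same principle, parsing service logs for the failure lines and then blocking the offending IP at the firewall.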

2. Open Relay
A Mail Transfer Agent (MTA) normally uses an SMTP server to send email from your server’s users to people around the world. With an open relay, anyone can use your SMTP server, including spammers. Not only is it bad to give access to people who send spam, it could very well get your server placed on a DNS blacklist that some ISPs use to block mail from your IP. It is very easy to close an open relay. Just follow the documentation for your MTA.
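For Postfix, one widely used MTA, closing the relay amounts to a few lines in `main.cf` that permit relaying only for local networks and authenticated users (a sketch; the network range is illustrative, so consult your MTA’s documentation for your setup):

```
# /etc/postfix/main.cf — restrict who may relay mail through this server
mynetworks = 127.0.0.0/8
smtpd_relay_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    defer_unauth_destination
```

With this in place, connections from outside `mynetworks` that have not authenticated can deliver mail *to* your domains but cannot relay mail onward to third parties.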

3. Botnet
Attackers use botnets to automatically run and distribute malicious software on “agent” servers. They then use the agent machines to attack or infect others. Because all of this can be done automatically without user intervention, botnets can spread very quickly and be deadly for large networks. They are commonly used in DDoS attacks and spam campaigns.

4. DoS
DoS stands for Denial of Service, a technique attackers use to effectively shut off access to a site. They accomplish this by flooding the site with so much traffic that the victim’s server becomes unresponsive. While some DoS attacks come from single attackers, others are coordinated and are called Distributed Denial of Service (DDoS) attacks. Often, the users of computers executing a DDoS do not even know their machines are being used as agents.
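One common mitigation for single-source floods is per-client rate limiting. The sketch below implements a token bucket, which allows a burst up to `capacity` requests and then throttles to `rate` requests per second; the class and parameters are illustrative, not taken from any particular product.

```python
import time

class TokenBucket:
    """Per-client rate limiter: bursts up to `capacity`, sustained `rate`/sec."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        """Spend one token if available; refill based on elapsed time."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2, capacity=5)
burst = [bucket.allow() for _ in range(8)]
print(burst.count(True))  # roughly the burst capacity (5)
```

A server would keep one bucket per client IP and drop or delay requests when `allow()` returns False, so a flood from one source exhausts its own bucket instead of the server.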

5. Cross-site Scripting
Cross-site scripting, or XSS, is a technique that exploits vulnerabilities in web applications. Such a vulnerability allows an attacker to inject code into a server-side script, which is then used to execute malicious client-side scripts or gather sensitive data from users. You can fix most XSS problems by using scanner software to detect vulnerabilities and then fixing whatever you find.
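The core defence is to escape untrusted input before embedding it in HTML, so the browser renders it as text rather than executing it as code. A minimal example using Python’s standard library:

```python
import html

# Attacker-supplied input that would run as script if echoed verbatim.
user_input = '<script>alert("xss")</script>'

# html.escape converts the HTML-significant characters to entities.
safe = html.escape(user_input)
print(safe)
# &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

Template engines do this escaping automatically by default; XSS bugs typically arise where that escaping is bypassed or forgotten.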

6. SQL Injection
Like XSS, SQL injection requires a vulnerability to be present, in this case in the way a web application builds queries for its database. Malicious code is inserted into strings that are later passed to the SQL server, parsed, and executed. As with other vulnerability-dependent attacks, you can prevent it by scanning for problem code and fixing it.
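The standard fix is to use parameterised queries, so attacker input is bound as a literal value and never parsed as SQL. A self-contained sketch with an in-memory SQLite database (table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Classic injection payload: concatenated into SQL, it would match every row.
evil = "' OR '1'='1"

# Parameterised query: the driver binds `evil` as a plain string,
# so the payload never reaches the SQL parser as code.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (evil,)).fetchall()
print(rows)  # [] — no match, injection neutralised
```

The same placeholder style (`?`, `%s`, or named parameters, depending on the driver) applies to any SQL database; string concatenation of user input into queries is the thing to scan for and eliminate.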

7. Malware
Malware can take many forms, but as the name implies, it is malicious software. It can take the form of viruses, bots, spyware, worms, trojans, rootkits, and any other software intended to cause harm. In most cases, malware is installed without the user’s direct consent. It may attack the user’s computer and/or attack other computers through the user’s own system. Having proper firewall and security software protection can usually prevent malware from spreading.

8. Unpatched Software
Most threats to a server can be prevented simply by having up-to-date, properly-patched software. All server operating system vendors and distributions publish security updates. By installing them on your system in a timely manner, you prevent attackers from using your server’s own vulnerabilities against it.

9. Careless Users
The most prevalent threat to a server’s security is user carelessness. If you or your users have passwords that are easy to guess, poorly written code, unpatched software, or a lack of security measures such as anti-virus software, you are asking for trouble. By enforcing strong security practices and secure authentication, you can lessen or even eliminate most threats.

Every day, attackers conspire to take down applications and steal data, leaving data centre infrastructure in the crosshairs. Storing an organisation’s most valuable and most visible assets – its web, DNS, database and email servers – data centres have become the number one target of cyber criminals, hacktivists and state-sponsored attackers.
Whether seeking financial gain, competitive intelligence or notoriety, attackers are carrying out their assaults using a range of weapons. The top five most dangerous threats to a data centre are:
1. DDoS attacks
2. Web application attacks
3. DNS infrastructure: attack target and collateral damage
4. SSL-induced security blind spots
5. Brute force and weak authentication
To counter these threats, organizations need a solution that can lock down their data centres. Otherwise they risk a high-profile data breach, downtime or even brand damage.

1. DDoS Attacks
Servers are a prime target for distributed denial of service (DDoS) attacks aimed at disrupting and disabling essential Internet services. While web servers have been at the receiving end of DDoS attacks for years, attackers are now exploiting web application vulnerabilities to turn web servers into ‘bots’. They use these captive servers to attack other websites.
By leveraging web, DNS and NTP servers, attackers can amplify the size and the strength of DDoS attacks. While servers will never replace traditional PC-based botnets, their greater compute capacity and bandwidth enable them to carry out destructive attacks – one server could equal the attack power of hundreds of PCs.
With more and more DDoS attacks launched from servers, it’s not surprising that the size of attacks has grown sharply. Between 2011 and 2013, the average size of DDoS attacks surged from 4.7 to 10 Gbps.
Worse, there has been a staggering increase in the average packets per second in typical DDoS attacks: attack rates skyrocketed 1,850 percent to 7.8 Mpps between 2011 and 2013. At the current trajectory, DDoS attacks could reach 95 Mpps in 2018 – powerful enough to incapacitate most standard networking equipment.
DDoS for hire services, often called ‘booters’, have mushroomed too. Many advertise their capabilities in YouTube videos and forum posts. While some masquerade as ‘stress testing’ services, many boldly claim to ‘take enemies offline’ or ‘eliminate competitors’. Such services enable virtually any individual or organisation to execute a DDoS attack.

2. Web Application Attacks
Cyber criminals also launch web attacks like SQL injection, cross-site scripting (XSS) and cross-site request forgery (CSRF), trying to break into applications and steal data for profit. Increasingly, attackers target vulnerable web servers and install malicious code in order to transform them into DDoS attack sources.
Some 98 percent of all applications currently have or have had vulnerabilities, and the median number of vulnerabilities per application was 20 in 2014, according to a 2015 Trustwave Global Security Report.
Today’s most dangerous application threats, like SQL injection and cross-site scripting, aren’t new but they are still easy to perform and lethally effective. Tools like the Havij SQL injection tool enable hackers to automate their attack processes and quickly exploit vulnerabilities.
The recent wave of web attacks on CMS applications has also revealed a gaping hole in the strategy of locking down applications by writing secure code. Because CMS applications are usually developed by third parties, organizations can’t rely on the protection of secure coding. In 2013, 35 percent of all breaches were caused by web attacks. More than ever, organizations need a proactive defense to block web attacks and ‘virtually patch’ vulnerabilities.

3. DNS Infrastructure
DNS servers have become a top attack target for two reasons. First, taking DNS servers offline is an easy way for attackers to keep thousands or millions of Internet subscribers from accessing the Internet. If attackers incapacitate an ISP’s DNS servers, they can prevent the ISP’s subscribers from resolving domain names, visiting websites, sending email and using other vital Internet services.
Secondly, attackers can exploit DNS servers to amplify DDoS attacks. In DNS reflection attacks, attackers spoof the IP address of their real attack target. They send queries that instruct the DNS server to recursively query many DNS servers or to send large responses to the victim. As a result, powerful DNS servers drown the victim’s network with DNS traffic. Even when DNS servers are not the ultimate target of the attack, they can still suffer downtime and outages as the result of a DNS reflection attack.
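The mechanics of amplification can be sketched with rough numbers. The sizes below are illustrative assumptions, not measurements, but they show why reflection is attractive to attackers: a small spoofed query elicits a far larger response aimed at the victim.

```python
# Back-of-the-envelope DNS amplification factor.
query_bytes = 60        # illustrative: a small spoofed DNS query
response_bytes = 3000   # illustrative: a large response (e.g. ANY + DNSSEC)

amplification = response_bytes / query_bytes
print(f"amplification factor: {amplification:.0f}x")  # amplification factor: 50x
```

At such ratios, an attacker controlling only modest upstream bandwidth can direct many times that volume of traffic at the victim, with the DNS servers doing the heavy lifting.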

4. SSL-induced blind spots
To prevent the continuous stream of malware and intrusions in their networks, enterprises need to inspect incoming and outgoing traffic for threats. Unfortunately, attackers are increasingly turning to encryption to evade detection.

With more and more applications supporting SSL – over 40 percent of applications can use SSL or change ports – SSL encryption represents an enormous blind spot that malicious actors can exploit.
While many firewalls, intrusion prevention and threat prevention products can decrypt SSL traffic, they can’t keep pace with growing SSL encryption demands. The transition from 1024- to 2048-bit SSL keys has burdened security devices because 2048-bit certificates require approximately 6.3 times more processing power to decrypt. With SSL certificate key lengths continuing to increase, many security devices are collapsing under increased decryption demands.

For end-to-end security, organisations need to inspect outbound SSL traffic originating from internal users, and inbound SSL traffic originating from external users to corporate-owned application servers to eliminate the blind spot in corporate defenses.

Clearly organisations need a high-powered solution to intercept and decrypt SSL traffic, offloading intensive SSL processing from security devices and servers.

Princess Austen (BSc.)
CS & Strategic Marketing Dept.
LekkiHost Limited




Do you have a business and don’t have a website? If you said yes, it’s almost as if your business doesn’t exist. In this modern era, people and companies are on the internet for information. Why do you think people visit a website? It’s primarily to find information. And if you’re in the business world, information is critical. You need to have a website for your customers. It needs to contain information about what you can do for them.

A website is a powerful sales tool and one that allows you to address your customers’ concerns, give them the information they need to make a decision and create compelling calls to action. Sure, you can keep placing ads in the Yellow Pages and hope that word-of-mouth generates on its own…or you can build something that inspires it to happen. Your website is your home turf where people can go to seek out trusted information about your company and engage with you on a more personal level. Use it to build confidence in your brand and to give customers important buying information (and incentives).



Have you ever advertised your business through printed media, radio, television or other channels? It’s expensive! Investing in advertising is necessary, but it takes a lot of money. Having a website makes promoting your company far less expensive: online equivalents of many offline advertising channels are inexpensive, and some are free.


A website is also more environmentally friendly when it comes to advertising and marketing. There are lots of ways to advertise your products or services through the internet. One example is Facebook ads, an advertising feature offered through Facebook. Another is search engine optimisation (SEO). This is a major advantage for your business: a good SEO service provider can boost the ranking of your website, which quickly translates into increased sales and higher profits.


Having a website will be more convenient for your customers and leads. Make it easy for your customers to purchase from you! Many will be more likely to visit your website, rather than driving a car to your physical location and browsing for your products. From a customer’s point of view, it’s better for them if they don’t have to ask anything. They can just find what they’re looking for on your online site.


Most businesses have local popularity, but what about potential customers outside their city? A website can help you generate more customers. Not just outside your city, but worldwide. The internet offers a global community. With a website, your business will be visible around the world.


Have you ever experienced having to turn customers away because it’s closing time? Well, you don’t have to close the doors of your website. An online site can be visited any time of the day or night. People will look to your site instead of going to your shop because it is more accessible. Just make sure to post enough information about your products and services.


Did you know that if you own a website, you can actually track everything that is happening on it? You can even look for information that will tell you how many people visited your site, or how many people messaged or emailed you. You can access the progress of your website and view all its pages. You can even make an update anytime, making it much less expensive than printed material.


Smart business owners create a blog page for their company. Having a blog to post fresh content will keep your website attractive and fresh.


Links are very important to viral marketing. If you have many sites linking to you, it is like spreading the word about your company all around the world. If you have a good website with good content related to your information, products or services, people are more likely to link to it from their own sites. This means they recognize your website as valuable.


Having a website can build better relationships with your customers. You can send messages instantly to your customers through email. Also, your customers can review your products online and can also leave feedback for you and your business. It’s best to always send your customer a message. This is essential for building a good relationship with them. You can even give them more information about your business through messages or emails.


If you are a business owner, more visitors leads to more potential sales. That’s how your website will help you. You can drive more people to your site by consistently updating and promoting the contents of your site. The more informative your site is, the greater the possibility of increasing your sales.


A website gives you the opportunity to prove your credibility. You have to tell your customers why you deserve their trust through your website. This can earn positive feedback for your service and products. Also, your website serves as a place for a potential investor to explore what your business is about and what it can do in the future.


What is the difference between a client and a customer? A customer is someone who walks in, buys something, and that’s it. A client is a regular customer, one who buys your products or services repeatedly or under contract. Having a website gives you a chance to gain more clients who can help your business grow.


You stop being invisible. I’m not trying to be flippant, but by creating a website you stop being invisible to the people trying to find you online. More and more studies are telling us about the ROBO effect, where customers research online before buying offline. They’re typing their problems or needs into the search engine of their choice and researching the companies that appear for those queries. If you don’t have a web presence, there’s no chance of you showing up, and you never even enter into their thought process. In these modern times of information flying at the speed of light, you can’t afford to be invisible.

Princess Austen (BSc.)
CS & Strategic Marketing Dept.
LekkiHost Limited



The ICANN policy can be found at

The Transfer Policy allows Registrants to delegate giving their consent to a Change of Registrant to a third-party on their behalf. The policy defines such third-party as a Designated Agent (DA):

1.2 “Designated Agent” means an individual or entity that the Prior Registrant or New Registrant explicitly authorizes to approve a Change of Registrant on its behalf.

In practical terms, this means that a Registrant can give authority to LekkiHost to confirm a Change of Registrant. When LekkiHost is enabled to act as a Designated Agent, the Registrant will not need to receive or confirm an email in order for a change to proceed. Instead, LekkiHost will always auto-approve any Change of Registrant.

More details here:-



The Internet in the United States grew out of the ARPANET, a network sponsored by the Advanced Research Projects Agency of the U.S. Department of Defense during the 1960s. The Internet in the United States in turn provided the foundation for the world-wide Internet of today.

Internet access in the United States is largely provided by the private sector and is available in a variety of forms, using a variety of technologies, at a wide range of speeds and costs. In 2014, 87.4% of Americans were using the Internet, which ranks the U.S. 18th out of 211 countries in the world. A large number of people in the US have little or no choice over who provides their internet access. The country suffers from a severe lack of competition in the broadband business: nearly one-third of households in the United States have either only one choice of home broadband Internet service provider, or no options at all.

Internet top-level domain names specific to the U.S. include .us, .edu, .gov, .mil, .as (American Samoa), .gu (Guam), .mp (Northern Mariana Islands), .pr (Puerto Rico), and .vi (U.S. Virgin Islands). Many U.S.-based organizations and individuals also use generic top-level domains (.com, .net, .org, .name, etc).


Access to the Internet can be divided into dial-up and broadband access. Around the start of the 21st century, most residential access was by dial-up, while access from businesses was usually by higher speed connections. In subsequent years dial-up declined in favor of broadband access. Both types of access generally use a modem, which converts digital data to analog for transmission over a particular analog network (ex. the telephone or cable networks).

Dial-up access is a connection to the Internet through a phone line, creating a semi-permanent link to the Internet. Operating on a single channel, it monopolizes the phone line and is the slowest method of accessing the Internet. Dial-up is often the only form of Internet access available in rural areas because it requires no infrastructure other than the already existing telephone network. Dial-up connections typically do not exceed a speed of 56 Kbit/s, because they are primarily made via a 56k modem.

Broadband access includes a wide range of speeds and technologies, all of which provide much faster access to the Internet than dial-up. The term “broadband” once had a technical meaning, but today it is more often a marketing buzzword that simply means “faster”. Broadband connections are continuous or “always on” connections, without the need to dial and hang-up, and do not monopolize phone lines. Common types of broadband access include DSL (Digital Subscriber Lines), Cable Internet Access, Satellite Internet Access, mobile broadband via cell phones and other mobile devices among many others. In 2015, the United States Federal Communications Commission (FCC) defined broadband as any connection with a download speed of at least 25 Mbit/s and an upload speed of at least 3 Mbit/s, though the definition has used a slower speed in the past.
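The 2015 FCC thresholds quoted above can be expressed as a trivial check (a sketch; the function name and sample speeds are ours):

```python
# FCC (2015) broadband definition: at least 25 Mbit/s down, 3 Mbit/s up.
def is_broadband(down_mbps, up_mbps):
    return down_mbps >= 25 and up_mbps >= 3

print(is_broadband(50, 5))   # True  — meets both thresholds
print(is_broadband(10, 1))   # False — would only have qualified under older, slower definitions
```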

The percentage of the U.S. population using the Internet grew steadily through 2007, declined slightly in 2008 and 2009, growth resumed in 2010, and reached its highest level so far (81.0%) in 2012, the latest year for which data is available. 81.0% is slightly above the 2012 figure of 73% for all developed countries. Based on these figures the U.S. ranked 12th out of 206 countries in 2000, fell to 31st out of 209 by 2010, and was back up slightly to 28th out of 211 in 2012. In 2012 the U.S. figure of 81.0% was similar to those of France (83.0%), Belgium (82.0%), Australia (82.3%), Austria (81.0%), Slovakia (80%), Kuwait (79.2%), and Japan (79.1%). The figures for the top ten countries in 2012 ranged from 91.0% for Finland to 96.9% for the Falkland Islands.

Internet usage in the United States varies widely from state to state. For example, in the U.S. overall in 2011, 77.9% of the population used the Internet. But in that same year (2011), there was a large gap in usage between the top three states – Washington (80.0%), New Hampshire (79.8%) and Minnesota (79.0%) – and the bottom three states – Mississippi (59.0%), New Mexico (60.4%) and Arkansas (61.4%).

Internet usage in the US from 2000 to 2012

Fixed (wired) and wireless broadband penetration have grown steadily, reaching peaks of 28.0% and 89.8% respectively in 2012. These rates place the U.S. above the world average of 25.9% for fixed broadband in developed countries and well above the average of 62.8% for wireless broadband in OECD countries. Wireless broadband subscriptions in the U.S. are primarily mobile-cellular broadband. Because a single Internet subscription may be shared by many people and a single person may have more than one subscription, the penetration rate will not reflect the actual level of access to broadband Internet of the population and penetration rates larger than 100% are possible.
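A toy calculation shows why penetration above 100% is possible: the rate divides subscriptions by population, and one person may hold several subscriptions. The figures below are illustrative, not actual US data.

```python
# Penetration rate = subscriptions / population, expressed as a percentage.
population = 100
subscriptions = 130   # e.g. many people hold both a phone and a tablet plan

penetration = subscriptions / population * 100
print(f"{penetration:.1f}%")  # 130.0%
```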

A 2013 Pew study on home broadband adoption found that 70% of consumers have a high-speed broadband connection. About a third of consumers reported a “wireless” high-speed connection, but the report authors suspect that many of these consumers mistakenly described a wired DSL or cable connection as wireless. Another Pew Research Center survey, results of which were published on February 27, 2014, revealed that 68% of American adults connect to the Internet with mobile devices like smartphones or tablet computers. The report also put Internet usage by American adults as high as 87%, while young adults aged between 18 and 29 were at 97%.

Global bandwidth concentration: the U.S. lost its historical leadership in 2011. In 2014, three countries hosted 50% of globally installed bandwidth potential; ten countries hosted almost 75%.

The U.S. lost its global leadership in terms of domestically installed bandwidth in 2011, when it was replaced by China, which by 2014 hosted more than twice as much national bandwidth potential (China: 29% versus US: 13% of the global total).

In measurements made between April and June 2013 (Q2), the United States ranked 8th out of 55 countries with an average connection speed of 8.7 Mbit/s. This represents an increase from 14th out of 49 countries and 5.3 Mbit/s for January to March 2011 (Q1). The global average for Q2 2013 was 3.3 Mbit/s, up from 2.1 Mbit/s for Q1 2011. In Q2 2013 South Korea ranked first at 13.3 Mbit/s, followed by Japan at 12.0 Mbit/s, and Switzerland at 11.0 Mbit/s.

Internet taxes and taxation of digital goods

In 1998, the federal Internet Tax Freedom Act halted the expansion of direct taxation of the Internet that had begun in several states in the mid-1990s. The law, however, did not affect sales taxes applied to online purchases, which continue to be taxed at varying rates depending on the jurisdiction, in the same way that phone and mail orders are taxed.

The absence of direct taxation of the Internet does not mean that all transactions taking place online are free of tax, or even that the Internet is free of all tax. In fact, nearly all online transactions are subject to one form of tax or another. The Internet Tax Freedom Act merely prevents states from imposing their sales tax, or any other kind of gross receipts tax, on certain online services. For example, a state may impose an income or franchise tax on the net income earned by the provider of online services, while the same state would be precluded from imposing its sales tax on the gross receipts of that provider.

Net neutrality in the United States

As a practical matter, there is a degree of net neutrality in the United States, in that telecommunications companies rarely offer different rates to broadband and dial-up Internet consumers based on content or service type. However, there are no clear legal restrictions against these practices. Internet access is categorized under U.S. law as an information service, and not a telecommunications service, and thus has not been subject to common carrier regulations.

Five failed attempts have been made to pass network neutrality bills in Congress. Each of these bills sought to prohibit Internet service providers from using variable pricing models based on the user’s quality-of-service level. Described as tiered service in the industry and as price discrimination by some economists, a typical provision in the bills states: “[Broadband service providers may] only prioritize…based on the type of content, applications, or services and the level of service purchased by the user, without charge for such prioritization”.

On August 5, 2005, the FCC reclassified some services as information services rather than telecommunications services, and replaced common carrier requirements on them with a set of four less-restrictive net neutrality principles. These principles, however, are not FCC rules, and therefore not enforceable requirements. Actually implementing the principles requires either official FCC rule-making or federal legislation.

On April 6, 2010, the United States Court of Appeals for the District of Columbia Circuit ruled in Comcast Corp. v. FCC that the FCC lacked the ancillary statutory authority under Title I of the Communications Act of 1934 to force Internet service providers, whose offerings it had classified as information services, to keep their networks open to all forms of legal content while employing reasonable network management practices. On December 21, 2010, the FCC approved the FCC Open Internet Order banning cable television and telephone service providers from preventing access to competitors or to certain web sites such as Netflix. The rules would not keep ISPs from charging more for faster access.

On February 26, 2015, the FCC adopted its new Open Internet rules, reclassifying broadband Internet access as a telecommunications service and applying to it new “rules of the road”.

“[Open Internet Rules are] designed to protect free expression and innovation on the Internet and promote investment in the nation’s broadband networks. The Open Internet rules are grounded in the strongest possible legal foundation by relying on multiple sources of authority, including: Title II of the Communications Act and Section 706 of the Telecommunications Act of 1996. As part of this decision, the Commission also refrains (or “forbears”) from enforcing provisions of Title II that are not relevant to modern broadband service. Together Title II and Section 706 support clear rules of the road, providing the certainty needed for innovators and investors, and the competitive choices and freedom demanded by consumers.

The new rules apply to both fixed and mobile broadband service. This approach recognizes advances in technology and the growing significance of mobile broadband Internet access in recent years. These rules will protect consumers no matter how they access the Internet, whether on a desktop computer or a mobile device.”

In summary, the new rules are as follows:

  • No blocking: broadband providers may not block access to legal content, applications, services, or non-harmful devices.

  • No throttling: broadband providers may not impair or degrade lawful Internet traffic on the basis of content, applications, services, or non-harmful devices.

  • No paid prioritization: broadband providers may not favor some lawful Internet traffic over other lawful traffic in exchange for consideration of any kind—in other words, no “fast lanes”. This rule also bans ISPs from prioritizing content and services of their affiliates.

Censorship

The strong protections for freedom of speech and expression against federal, state, and local governments’ censorship are rooted in the First Amendment to the United States Constitution. These protections extend to the Internet and as a result very little government mandated technical filtering occurs in the U.S. Nevertheless, the Internet in the United States is highly regulated, supported by a complex set of legally binding and privately mediated mechanisms.

After a decade and a half of contentious debate over content regulation, the country is still far from reaching political consensus on the acceptable limits of free speech and the best means of protecting minors and policing illegal activity on the Internet. Gambling, cyber security, and the dangers—real and perceived—to children who frequent social networking sites are important ongoing debates. Significant public resistance to proposed content restriction policies has prevented the more extreme measures used in some other countries from taking hold in the U.S.

Public dialogue, legislative debate, and judicial review have produced filtering strategies in the United States that are different from those found in most of the rest of the world. Many government-mandated attempts to regulate content have been barred on First Amendment grounds, often after lengthy legal battles. However, the government has been able to exert pressure indirectly where it cannot directly censor. With the exception of child pornography, content restrictions tend to rely more on the removal of content than blocking; most often these controls rely upon the involvement of private parties, backed by state encouragement or the threat of legal action. In contrast to much of the rest of the world, where ISPs are subject to state mandates, most content regulation in the United States occurs at the private or voluntary level.

Government policy and programs

With the advent of the World Wide Web, the commercialization of the Internet, and its spread beyond use within the government and the research and education communities in the 1990s, Internet access became an important public policy and political issue.

National Information Infrastructure and High Performance Computing and Communication Act of 1991

The High Performance Computing and Communication Act of 1991 (HPCA), Pub.L. 102–194, built on prior U.S. efforts toward developing a national networking infrastructure, starting with the ARPANET in the 1960s and the funding of the National Science Foundation Network (NSFnet) in the 1980s. It led to the development of the National Information Infrastructure and included funding for a series of projects under the titles National Research and Education Network (NREN) and High-Performance Computing and Communications Initiative which spurred many significant technological developments, such as the Mosaic web browser, and the creation of a high-speed fiber optic computer network. The HPCA provided the framework for the transition of the Internet from a largely government sponsored network to the commercial Internet that followed.

Universal Service Fund and Telecommunications Act of 1996

Universal service is a program dating back to the early 20th century whose original goal was to encourage or require the interconnection of telephone networks operated by different providers. Over time this grew into the more general goal of providing telephone service to everyone in the United States at a reasonable price. When Congress passed the Telecommunications Act of 1996 it provided for the creation of a Universal Service Fund to help meet the challenges and opportunities of the digital information age. The Universal Service Fund (USF) was established in 1997 by the Federal Communications Commission (FCC) to implement the goals of the Telecommunications Act.

The Telecommunications Act requires all telecommunications companies to make equitable and non-discriminatory contributions to the USF. Under the supervision of the FCC, the Universal Service Administrative Company (USAC) is responsible for allocating money from the central fund to four programs: High Cost, Low Income, Rural Health Care, and Schools and Libraries (E-rate). These programs are designed to promote the availability of quality services at just, reasonable, and affordable rates.

The goals of the fund are to:

  • Increase access to advanced telecommunications services throughout the Nation;
  • Advance the availability of such services to all consumers, including those in low income, rural, insular, and high cost areas at rates that are reasonably comparable to those charged in urban areas;
  • Increase access to telecommunications and advanced services in schools, libraries and rural health care facilities; and
  • Provide equitable and non-discriminatory contributions from all providers of telecommunications services to the fund supporting universal service programs.

Telecommunications companies may, but are not required to, charge their customers a fee to recover the costs of contributing to the Universal Service fund. Consumers may see this reflected in a line-item charge labeled “Universal Service” on telecommunications bills. The amount of this charge, if any, and the method used to collect the fee from consumers is determined by the companies and is not mandated by the FCC.
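As the paragraph notes, the amount and method of recovery are chosen by each carrier, not mandated by the FCC. One common approach is to apply a flat percentage to a customer's eligible charges; the sketch below uses that approach with purely illustrative numbers (the 15% factor and the $40 charge are assumptions, not official figures):

```python
# Sketch of how a carrier might compute a "Universal Service" line item.
# Both values below are illustrative assumptions, not FCC-mandated figures;
# carriers choose their own recovery method and amount.
contribution_factor = 0.15    # assumed pass-through rate (15%)
interstate_charges = 40.00    # customer's eligible monthly charges ($)

usf_line_item = round(interstate_charges * contribution_factor, 2)
print(f"Universal Service charge: ${usf_line_item:.2f}")  # -> $6.00
```

A different carrier could recover the same contribution through its base rates instead, in which case no separate line item would appear on the bill.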

In October 2011 the FCC voted to phase out the USF’s high-cost program that has been subsidizing voice telephone services in rural areas by shifting $4.5 billion a year in funding over several years to a new Connect America Fund focused on expanding broadband deployment.

E-Rate

More formally known as the Schools and Libraries Program, the E-Rate is funded from the Universal Service Fund. The E-Rate provides discounts to K-12 schools and libraries in the United States to reduce the cost of installing and maintaining telecommunications services, Internet access, and internal connections. The discounts available range from 20% to 90% depending on the poverty level and urban/rural status of the communities where the schools and libraries are located.

There has been a good deal of controversy surrounding the E-Rate, including legal challenges from states and telecommunications companies. The impact of the program is hard to measure, but at the beginning of 2005 over 100,000 schools had participated in the program. Annual requests for discounts are roughly three times the $2.25 billion that is available, so while all eligible schools and libraries receive some discounts, some do not receive all of the discounts to which they are entitled under the rules of the program.
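The discount mechanics described above can be sketched with hypothetical numbers (the 80% rate and the service cost are assumptions for illustration; actual discounts in the 20–90% range depend on poverty level and urban/rural status):

```python
# Illustrative E-Rate discount calculation -- figures are hypothetical.
annual_service_cost = 10_000.00  # assumed yearly cost of Internet access ($)
discount_rate = 0.80             # assumed discount for a high-poverty school

# The program pays the discounted share; the school pays the remainder.
e_rate_subsidy = annual_service_cost * discount_rate
school_pays = annual_service_cost - e_rate_subsidy
print(f"subsidy: ${e_rate_subsidy:.2f}, school pays: ${school_pays:.2f}")
# -> subsidy: $8000.00, school pays: $2000.00
```

When total requests exceed the annual cap, as the paragraph notes they regularly do, some applicants receive less than the full discount this formula would suggest.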

Rural Health Care Program

Like the E-Rate, the Rural Health Care Program (RHC) is funded from the Universal Service Fund. It provides funding to eligible health care providers for telecommunications services, including broadband Internet access, necessary for the provision of health care. The goal of the program is to improve the quality of health care available to patients in rural communities by ensuring that eligible health care providers have access to affordable telecommunications services, most often to implement “tele-health and tele-medicine” services, typically a combination of video-conferencing infrastructure and high speed Internet access, to enable doctors and patients in rural hospitals to access specialists in distant cities.

Over $417 million has been allocated for the construction of 62 statewide or regional broadband telehealth networks in 42 states and three U.S. territories under the Rural Health Care Pilot Program.

The Healthcare Connect Fund (HCF) is a new component of the Rural Health Care Program. The HCF will provide a 65 percent discount on eligible expenses related to broadband Internet connectivity to both individual rural health care providers (HCPs) and consortia, which can include non-rural HCPs (if the consortium has a majority of rural sites). Applications under the new program will be accepted starting in late summer 2013 with funding beginning on January 1, 2014. Discounts for traditional telecommunications will continue to be available under the existing RHC Telecommunications Program.

Rural broadband and advanced telecommunications

The Rural Utilities Service of the U.S. Department of Agriculture oversees several programs designed to bring the benefits of broadband Internet access and advanced telecommunications services to underserved areas in the U.S. and its territories:

  • Farm Bill Broadband Loan Program: Provides loans for funding the costs, on a technology neutral basis, of construction, improvement, and acquisition of facilities and equipment to provide broadband service to eligible rural communities.
  • Recovery Act Broadband Initiatives Program (BIP): A one-time program that is now closed, the BIP provided grants and loans to provide access to broadband services.
  • Community Connect Program: Provides grants to assist rural communities to construct, expand, purchase, or lease facilities and services to deploy broadband Internet access to all residential and business customers located within a service area and to all participating critical community facilities, including funding for up to ten computer access points to be used in a community center.
  • Distance Learning and Telemedicine Loan and Grant Program: Provides grants and loans to support acquisition of advanced telecommunications technologies, instructional programming, and technical assistance to provide enhanced learning and health care opportunities for rural residents.
  • Telecommunications Infrastructure Loan Program: Provides long-term direct and guaranteed loans to qualified organizations for the purpose of financing the improvement, expansion, construction, acquisition, and operation of telephone lines, facilities, or systems to furnish and improve telecommunications service in rural areas. All facilities financed must be capable of supporting broadband services.

American Recovery and Reinvestment Act of 2009

The 2009 Stimulus Bill, as it is commonly termed, was enacted by the 111th United States Congress and signed into law by President Barack Obama on February 17, 2009. The bill provides funding for broadband grant and loan programs:

  • $4.7 billion to create the Broadband Technology Opportunities Program within the National Telecommunications and Information Administration (NTIA) of the Department of Commerce to bring broadband to un-served and underserved areas and to facilitate broadband use and adoption.
  • $2.5 billion to be distributed by the Department of Agriculture to help bring broadband to rural areas.
  • A requirement that the Federal Communications Commission (FCC) develop a national broadband plan within one year.

National Broadband Plan (United States)

Internet access has become a vital tool in development and social progress since the start of the 21st century. As a result, Internet penetration and, more specifically, broadband Internet penetration rates are now treated as key economic indicators. The United States is widely perceived as falling behind in both its rate of broadband Internet penetration and the speed of its broadband infrastructure.

For all of these reasons, there were calls for the U.S. to develop, adopt, fund, and implement a National Broadband Plan. The difficulty of successfully designating and distributing government funds to increase Internet access (particularly via broadband) is a central limiting factor in the development of such a policy, but proponents believe that establishing a national plan is necessary for social and economic progress. Those demanding a national broadband policy argue that it is the best means by which the United States could achieve “universal availability and adoption of truly high-speed access”.

The Federal Communications Commission (FCC) published a National Broadband Plan in March 2010, after first soliciting public comments from April 2009 through February 2010. The goals of the plan are:

  • At least 100 million U.S. homes should have affordable access to actual download speeds of at least 100 megabits per second and actual upload speeds of at least 50 megabits per second by the year 2020.

  • The United States should lead the world in mobile innovation, with the fastest and most extensive wireless networks of any nation.

  • Every American should have affordable access to robust broadband service, and the means and skills to subscribe if they so choose.

  • Every American community should have affordable access to at least one gigabit per second broadband service to anchor institutions such as schools, hospitals, and government buildings.

  • To ensure the safety of the American people, every first responder should have access to a nationwide, wireless, interoperable broadband public safety network.

  • To ensure that America leads in the clean energy economy, every American should be able to use broadband to track and manage their real-time energy consumption.

