The global network is much bigger than you think, and bigger than any search engine can index. Every netizen knows Google, Facebook and Amazon, but there is much more behind these user-friendly, respectable websites. They are only a small part of the Internet: the part that is publicly known and readily accessible. The DeepWeb and the DarkWeb are another matter.

DeepWeb and DarkNet – what are they and what’s the difference?

The English term DeepWeb is a calque of "deep sea", the name for the deep ocean environment that remains largely unexplored and difficult to access.

DeepWeb – web pages, sites, online communities and network-connected devices that are intentionally or unintentionally hidden from search engine crawlers and accessible only to a limited number of people. The DeepWeb also includes databases generated automatically by programs for other programs, and other non-HTML content.

Strictly speaking, the content management system in which this article was written and laid out is part of the DeepWeb. The same goes for countless private networks and pages. Consider how many personal pages sit behind a single Gmail account, and you get a sense of how big the DeepWeb is. Some analysts estimate the DeepWeb to be as large as the public Internet; others picture it as the underwater part of an iceberg, far more massive than the visible tip. It is difficult to measure what is well hidden.

This is one of the reasons the media regularly publish frightening stories about a giant DarkNet, a lower level of the Internet hidden from the average user and inhabited by criminals. The stories are misleading because the DarkNet is confused with the much larger and, by and large, boring DeepWeb.

DarkNet (DarkWeb) – networks that operate “on top” of the Internet infrastructure. What unites them is that connections are established only between trusted peers (peer-to-peer), without dedicated servers but with extensive use of encryption and non-standard communication protocols. Below we will look at the most famous and busiest of these networks.


Freenet

The Freenet project was conceived by the Irishman Ian Clarke, who published open-source software for the decentralized exchange of encrypted files in the early 2000s. The concept is outlined in Clarke's dissertation at the University of Edinburgh in Scotland, "A Distributed, Decentralized Information Storage and Retrieval System", which describes a data store in which it is impossible to determine the source of a file or to stop the spread of information.

Freenet requires each user to install a client program and to donate resources to the network: a portion of disk space and a share of Internet bandwidth. These resources keep the network running.

Files placed in Freenet are encrypted, broken into pieces and distributed among participants' computers, where they are stored until requested. When uploading a file, the user receives a cryptographic key that allows anyone holding it to reassemble the encrypted file from the scattered fragments and extract the information.
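The principle can be sketched in a few lines of Python. This is only an illustration of the idea, not Freenet's actual format: the real network uses its own key types (CHK/SSK) and routing, and the toy XOR "cipher" and all names below are invented for the demo.

```python
# Toy sketch of Freenet-style storage: split, encrypt, scatter, reassemble.
import hashlib
import os

CHUNK = 4  # tiny chunk size, just for the demo


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the key and a per-chunk nonce."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]


def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))


def insert(data: bytes, nodes: dict) -> bytes:
    """Encrypt chunks and scatter them; return the key needed to reassemble."""
    key = os.urandom(16)
    for offset in range(0, len(data), CHUNK):
        chunk = data[offset:offset + CHUNK]
        nonce = offset.to_bytes(4, "big")
        blob = xor(chunk, keystream(key, nonce, len(chunk)))
        # Each encrypted chunk is addressed by a content hash, so the node
        # storing it cannot tell what it contains or which file it belongs to.
        nodes[hashlib.sha256(blob + nonce).hexdigest()] = (nonce, blob)
    return key


def retrieve(key: bytes, nodes: dict) -> bytes:
    """Holding the key, collect the fragments and decrypt them in order."""
    chunks = []
    for _, (nonce, blob) in sorted(nodes.items(), key=lambda kv: kv[1][0]):
        chunks.append(xor(blob, keystream(key, nonce, len(blob))))
    return b"".join(chunks)


nodes = {}  # stands in for the distributed storage
key = insert(b"hello freenet", nodes)
assert retrieve(key, nodes) == b"hello freenet"
```

Without the key, the stored blobs are just opaque encrypted fragments; with it, any participant can reassemble the file. This mirrors why, in Freenet, distribution cannot be stopped by targeting the hosts.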

Periodically the network cleans itself of old, unused data, while popular files keep being copied and spread among participants ever more actively. This automated process keeps the distributed storage alive, but the architecture makes the network slow, and clients are expected to keep their computers always on. The amount of donated disk space directly affects the speed of access to information, since that space mostly holds encrypted copies of the network's most popular files.

During its existence, Freenet software has been downloaded several million times, but the platform is gradually losing popularity. As a means of distributing files, Freenet has given way to the much less secure but faster and more convenient BitTorrent protocol.

However, Freenet continues to work. Moreover, its users have found ways not only to exchange files but also to host forums, microblogs and anonymous mailboxes on the network. Updated images of web pages are encrypted and inserted into Freenet, then downloaded to the user's computer on request.


Tor

The story behind this darknet is not so simple. It is known that Tor (The Onion Router) grew out of work at the US Naval Research Laboratory, which collaborated with DARPA, the defense research agency known for its fantastic military projects, from combat robots to neural interfaces.

Why the military needed a network of proxy servers that hides the identity of Internet users is not known for certain, nor is it clear why the project documentation and source code were published in 2001. Since then the US military has distanced itself from the project, but it continues to receive donations from government agencies and other benefactors.

Despite its nebulous origins that continue to serve as the basis for conspiracy theories, the Tor software has passed several public audits and is generally considered reliable.

The central concept of the network is onion routing. The idea is to hide the IP address of the sender's computer by wrapping the transmitted data in several layers of encryption, so that a packet comes to resemble a cryptographic onion.

Computers connected to Tor form a system of relay nodes. When data is sent through Tor, the network randomly selects three relays, and the packet is encrypted sequentially with three keys before sending. The first relay, the entry (guard) node, removes the top layer of protection and learns only where to forward the packet next. The second, the middle node, peels off the next layer of the onion and passes the packet to the third, the exit relay, where the information is finally decrypted and released to the regular Internet on its way to the original addressee. The server's response is encrypted in the same way as the request and returned along the chain of relays.

As a result, none of the nodes has all the information about the data. In addition, the chains of nodes are constantly changing, so it is extremely difficult to trace the complete route of packets and find out their source.
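The layering described above can be sketched in Python. This is a conceptual toy only: a real Tor circuit negotiates keys with a handshake and uses authenticated, fixed-size cells, whereas the XOR "cipher" and relay names here are invented purely to show how each hop removes exactly one layer.

```python
# Toy illustration of onion routing: the client wraps a message in three
# layers of encryption; each relay peels exactly one layer.
import hashlib
import os


def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256-derived keystream (demo only)."""
    ks = b""
    counter = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, ks))


# One symmetric key is shared with each of the three relays in the circuit.
keys = {name: os.urandom(16) for name in ("guard", "middle", "exit")}


def wrap(message: bytes) -> bytes:
    """Client side: encrypt the innermost layer (exit) first, guard last."""
    onion = message
    for name in ("exit", "middle", "guard"):
        onion = xor_crypt(keys[name], onion)
    return onion


def relay(name: str, onion: bytes) -> bytes:
    """Each relay removes only its own layer and sees nothing beneath it."""
    return xor_crypt(keys[name], onion)


onion = wrap(b"GET / HTTP/1.1")
for hop in ("guard", "middle", "exit"):
    onion = relay(hop, onion)
# Only after the exit relay's layer is removed does the plaintext emerge.
assert onion == b"GET / HTTP/1.1"
```

The guard relay sees who is sending but only ciphertext; the exit sees the plaintext but not who sent it. No single hop holds both facts, which is exactly the property the prose above describes.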

As with Freenet, Tor is supported by volunteers who donate a share of their bandwidth, and this creates a problem: anyone can run an exit relay, which releases unencrypted traffic to the Internet, and, if they wish, examine the data leaving the Tor network. Confidential information sent through Tor therefore needs additional end-to-end protection, such as TLS.

Over time, Tor gained support for sites available exclusively inside the network (onion services), but this functionality remains a side effect.
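In the real Tor, publishing such an in-network site takes only a couple of directives in the `torrc` configuration file; the directory path and ports below are illustrative:

```
# torrc: publish a local web server as an onion service
HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:8080
```

After a restart, Tor writes the generated .onion hostname into the service directory, and visitors reach the local server on port 8080 without ever learning its real IP address.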

Tor is in demand among residents of countries with strict telecommunications regulation, such as China, where it serves as a way around censorship. It is popular with journalists, human rights defenders, political activists, investigators, insiders of all kinds, network trolls, intelligence agents and, of course, assorted criminals. It is to the latter that the DarkNet owes its bad reputation.

Popular interest in Tor surged after the famous revelations of Edward Snowden, who exposed the existence of mass electronic surveillance programs. The former NSA contractor used Tor to communicate with journalists.

Today Tor is the largest anonymous network, with more than 7,000 relays scattered across the world, everywhere except perhaps Antarctica.


I2P

The last project in this overview, the Invisible Internet Project (I2P), began development in 2003 with a team that included Freenet developers. This time the concept was based not on a decentralized data store or a system that hides the user's identity, but on building a new, decentralized Internet on top of the existing infrastructure: an anonymous, scalable and resilient communications environment independent of traditional domain registrars, DNS services and other vulnerable components. I2P is designed to do everything the regular Internet does, from streaming audio and video to any existing network protocol, but inside a distributed network.

As I2P was created and developed, the focus shifted to internal communications, with encryption planned to be even more paranoid than Tor's. To make packet paths harder to trace, the network aggregates packets and applies multilayer joint encryption, so-called garlic routing, and adds random filler content to packets.
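The bundling idea can be sketched as follows. This is only an illustration of the principle: real I2P "garlic" messages carry per-clove delivery instructions and use proper public-key cryptography, while the toy XOR cipher and the JSON container here are invented for the demo.

```python
# Toy sketch of "garlic" bundling: several messages plus random padding are
# packed into one encrypted container, so an outside observer cannot tell
# how many messages travel inside, or how large each one is.
import base64
import hashlib
import json
import os


def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256-derived keystream (demo only)."""
    ks = b""
    counter = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, ks))


def make_garlic(messages: list, key: bytes) -> bytes:
    """Bundle several messages ("cloves") and random filler into one blob."""
    cloves = [base64.b64encode(m).decode() for m in messages]
    padding = base64.b64encode(os.urandom(32)).decode()  # random filler
    bundle = json.dumps({"cloves": cloves, "padding": padding}).encode()
    return xor_crypt(key, bundle)


def open_garlic(blob: bytes, key: bytes) -> list:
    """Decrypt the container and recover the individual messages."""
    bundle = json.loads(xor_crypt(key, blob))
    return [base64.b64decode(c) for c in bundle["cloves"]]


key = os.urandom(16)
blob = make_garlic([b"msg to Alice", b"msg to Bob"], key)
assert open_garlic(blob, key) == [b"msg to Alice", b"msg to Bob"]
```

Because unrelated messages and padding travel together in one opaque container, traffic analysis based on packet counts and sizes becomes much harder, which is the point of the technique.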

According to open data, I2P is supported by about 55,000 active machines; the exact number varies with the time of day. Most of the nodes are located in Western and Eastern Europe.

Summing up

None of the listed anonymous networks are as versatile as the Internet. Each is designed to solve a specific problem and does it best.

You cannot store files in Tor or I2P the way you can in Freenet. Tor and Freenet do not handle voice calls or popular networking protocols, which I2P does. Freenet does not process data streams at all, unlike Tor and I2P. Through I2P you can reach the regular Internet, but that is Tor's specialty. You can host a website in Tor, but I2P is designed specifically for anonymous hosting.

If you are simply interested in information security, a paid VPN may be enough, but achieving real anonymity on the Internet is much harder. It requires not only mastering these three tools, but also learning much that lies beyond the scope of this overview.

Anonymous networks today resemble the Internet of the early 1990s: promising, but hard to use, slow, potentially unreliable and still under development. Who knows, perhaps the technologies developed for the DarkWeb and debugged by enthusiast programmers will form the basis of a safer and more stable next-generation Internet.

