From New Media to The Web
AlphaWorld in February 1998: "satellite" maps of a growing city in
AlphaWorld, a huge 3D multi-user virtual world run by ActiveWorlds. The
maps were produced by Roland Vilett as snapshots taken over the past
five years; by comparing them you can clearly see the rapid urban development
in this particular corner of cyberspace.
Dialogue, virtual meetings: one-to-one to many-to-many.
Virtual communities: Networked Virtual Environments (NVEs).
French Web site map (UREC) |
Date | Number of sites |
January 94 | 20 |
July 94 | 100 |
March 95 | 287 |
September 95 | 598 |
January 96 | 1325 |
May 96 | 2587 |
16 December 96 | 4987 |
2 October 97 | end of the service |
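The UREC counts above grow roughly exponentially. A quick sketch (Python, using only the figures from the table; month offsets from January 1994 are approximate) estimates the average doubling time of the French web over this period:

```python
import math

# (months since January 1994, number of sites) -- figures from the UREC table
counts = [(0, 20), (6, 100), (14, 287), (20, 598), (24, 1325), (28, 2587), (35, 4987)]

# Assume N(t) = N0 * 2**(t / T) between the first and last data points,
# and solve for the doubling time T.
t0, n0 = counts[0]
t1, n1 = counts[-1]
doubling_months = (t1 - t0) / math.log2(n1 / n0)
print(f"average doubling time: {doubling_months:.1f} months")
```

That is a doubling time of roughly four and a half months, consistent with the explosive growth of the web in the mid-90s.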
ftp://ftp.nw.com/zone/WWW-9601/top.html
and http://www.genmagic.com/Internet/Trends/index.html
In every home
Efficient business model ... even if really bad for the customer
1985: yellow/white pages
the web ... 10 years later
1995: new model with card reader integrated ...
http://www.zakon.org/robert/internet/timeline/
Region and Country | User Count (millions) | Population (millions) | Users (%) |
Africa | 6.3 | 840 | 0.8 |
Middle East / West Asia | 5.1 | 197 | 2.6 |
Asia/Pacific | 187.2 | 3601 | 5.2 |
* Hong Kong | 4.2 | 7 | 60.0 |
* South Korea | 25.9 | 48 | 54.0 |
* Australia | 10.8 | 20 | 54.0 |
* Singapore | 2.0 | 4 | 52.0 |
* Japan | 55.9 | 127 | 44.0 |
* PRC | 46.1 | 1280 | 3.6 |
* India | 7.3 | 1049 | 0.7 |
* remainder | 35.0 | 1066 | 3.3 |
The Americas | 219.2 | 848 | 26.0 |
* USA | 169.3 | 287 | 59.0 |
* Canada | 16.4 | 31 | 53.0 |
* Latin America | 33.5 | 530 | 6.3 |
Europe | 190.9 | 728 | 26.0 |
* Iceland | 0.2 | 0 | 70.0 |
* Sweden | 6.1 | 9 | 68.0 |
* Denmark | 3.2 | 5 | 63.0 |
* The Netherlands | 9.8 | 16 | 61.0 |
* Norway | 2.4 | 4 | 59.0 |
* United Kingdom | 34.2 | 60 | 57.0 |
* Finland | 2.5 | 5 | 52.0 |
* Russia | 17.7 | 143 | 12.4 |
* remainder | 114.8 | 486 | 23.6 |
Thursday, 15 July 2004, 14:38
The Internet and broadband base in the country is still languishing at 0.4 per cent
and 0.02 per cent, minister of state for communications and information technology
Shakeel Ahmad said today. The government is examining recommendations of the
Telecom Regulatory Authority of India (TRAI) to accelerate growth of Internet
and broadband penetration, he told the Rajya Sabha in a written reply.
http://sify.com/finance/fullstory.php?id=13522469
Bits per capita is a relatively new measure of Internet
use. The size of the Internet in a country indicates an element of its progress
towards an information-based economy. International Internet bandwidth provides
a measure of Internet activity because many people share accounts, or use corporate
and academic networks along with cyber cafes and business centers. Outgoing
bandwidth also takes better account of the wide range of possible use, from
those who write a few emails each week, to users who spend many hours a day
on the net browsing, transacting, streaming, and downloading. Because of this,
the often-used 'Number of Internet Users' indicator may have less relevance
in the developing world than in other places.
The coloured circle in each country on the map shows, to exact scale, the international
bandwidth in bits per capita (BPC) available in mid-2002 from publicly accessible
IP networks.
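The BPC measure itself is a simple ratio. A minimal sketch, with purely illustrative placeholder numbers (not taken from the map):

```python
# Bits per capita (BPC): international Internet bandwidth divided by population.
# The figures below are made-up illustrative values, not real measurements.

def bits_per_capita(bandwidth_bps: float, population: int) -> float:
    """International bandwidth (bits/s) available per inhabitant."""
    return bandwidth_bps / population

# A hypothetical country with 2 Gbit/s of international bandwidth
# and 50 million inhabitants:
print(bits_per_capita(2e9, 50_000_000))  # 40.0 bits per capita
```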
Bandwidth availability in Africa varies tremendously, but is generally very
low compared to developed countries. Although there are few intra-African links,
the marine fibre cables shown are now all operational and should provide faster
and cheaper routes within and out of Africa.
http://web.idrc.ca/en/ev-6568-201-1-DO_TOPIC.html
Broadband penetration, because of high pricing and incomplete availability, continues to be low, with 86% of connections still by modem (but including a few ISDN users). Cable is available only in affluent suburbs in Sydney and Melbourne, plus a proportion of Canberra, and ADSL is only feasible within 3-4 km of a proportion of telephone exchanges. SDSL is only now becoming available. Satellite is even more expensive than the other broadband alternatives. See also Sale (2001).
Moreover, users in many areas where broadband is unavailable or excessively expensive get far less than 56Kbps from their dial-up connections. The Government has been successful in its endeavours to avoid survey information about achieved dial-up speeds becoming publicly available. As late as June 2003, in its response to the Regional Telecommunications (Estens) Inquiry, it made clear that it still regards 19.2Kbps as being acceptable as a target minimum transmission speed for regional and rural Australia, and even for less fortunate urban areas.
http://www.anu.edu.au/people/Roger.Clarke/II/OzI04.html
The Internet is an infrastructure, in the sense in which that term is used to refer to the electricity grid, water reticulation pipework, and the networks of track, macadam and re-fuelling facilities that support rail and road transport. Rather than energy, water, cargo or passengers, the payload carried by the information infrastructure is messages.
The term 'Internet' has come to be used
in a variety of ways. Many authors are careless in their usage of the term,
and considerable confusion can arise. Firstly, from the perspective of the
people who use it, the Internet is a vague, mostly unseen, collection of
resources that enable communications between one's own device and devices
elsewhere. Exhibit 3.2 provides a graphical depiction of that interpretation
of the term 'Internet'.
http://www.anu.edu.au/people/Roger.Clarke/II/OzI04.html
From a technical perspective, the term Internet refers to a particular collection of computer networks which are inter-connected by means of a particular set of protocols usefully called 'the Internet Protocol Suite', but which is frequently referred to using the names of the two central protocols, 'TCP/IP'.
The term 'internet' (with a lower-case 'i') refers to any set of networks interconnected using the Internet Protocol Suite. Many networks exist within companies, and indeed within people's homes, which are internets, and which may or may not have a connection with any other network. The Internet (with an upper-case 'I'), or sometimes 'the open, public Internet', is used to refer to the largest set of networks interconnected using the Internet Protocol Suite.
Additional terms that are in common
use are Intranet, which is correctly used to refer to a
set of networks that are internal to a single organisation, and that are
interconnected using the Internet Protocol Suite (although it is sometimes
used more loosely, to refer to an organisation's internal networks, irrespective
of the protocols used). An Extranet is a set of networks
within a group of partnered organisations, that are interconnected using
the Internet Protocol Suite.
http://www.anu.edu.au/people/Roger.Clarke/II/OzI04.html
A network comprises nodes (computers) and arcs (means whereby messages can be transmitted between the nodes). A network suffers from fragility if individual nodes are dependent on only a very few arcs or a very few other nodes. Networks are more reliable if they involve a large amount of redundancy, that is to say that they comprise many computers performing similar functions, connected by many different paths. The Internet features multiple connections among many nodes. Hence, when (not if) individual elements fail, the Internet's multiply-connected topology has the characteristics of robustness (the ability to continue to function despite adverse events), and resilience (the ability to be recovered quickly and cleanly after failure). The Internet also has the characteristic of scalability, that is to say that it supports the addition of nodes and arcs without interruptions, and thereby can expand rapidly without the serious growing pains that many other topologies and technologies suffer.
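The redundancy argument can be made concrete with a toy sketch (Python; the topology and node names are hypothetical): in a multiply-connected network, messages still get through even when a node fails.

```python
from collections import deque

# A toy multiply-connected topology: nodes (computers) and arcs (links).
arcs = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "E"},
    "C": {"A", "B", "D", "E"},
    "D": {"A", "C", "E"},
    "E": {"B", "C", "D"},
}

def reachable(start, end, failed=frozenset()):
    """Breadth-first search: is there any path that avoids the failed nodes?"""
    if start in failed or end in failed:
        return False
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == end:
            return True
        for nxt in arcs[node] - failed - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

print(reachable("A", "E"))                # True
print(reachable("A", "E", failed={"C"}))  # still True: redundant paths exist
```

A star topology, by contrast, loses everything when its hub fails; the Internet's mesh of redundant arcs is what gives it robustness and resilience.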
The conception of the Internet protocols
took place during the 1960s and 1970s, at the height of the Cold War era.
Military strategists were concerned about the potentially devastating impact
of neutron bomb explosions on electronic componentry, and consequently placed
great stress on robustness and resilience (or, to use terms of that period,
'survivability' and 'fail-soft'). These characteristics were not formal requirements
of the Internet, and the frequently-repeated claims that 'the Internet was
designed to withstand a neutron bomb' are not accurate. On the other hand,
those design characteristics were in the designers' minds at the time.
http://www.anu.edu.au/people/Roger.Clarke/II/OzI04.html
The networks that had been designed to
support voice-conversations provided a dedicated, switched path to the caller
and the callee for the duration of the call, and then released all of the
segments for use by other callers. Data networks were designed to apply a
very different principle. Messages were divided into relatively small blocks
of data, commonly referred to as packets. Packets despatched
by many senders were then interleaved, enabling efficient use of a single
infrastructure by many people at the same time. This is referred to as a packet-switched
network, in comparison with the telephony PSTN, which is a circuit-switched
network. The functioning of a packet-switched network can be explained using
the metaphor of a postal system (Clarke 1998).
http://www.anu.edu.au/people/Roger.Clarke/II/OzI04.html
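The postal metaphor can be sketched in a few lines of Python (the packet format here is invented for the sketch): messages are cut into numbered packets, packets from different senders are interleaved on the same link, and each receiver reassembles its own message from the sequence numbers.

```python
def packetize(sender, message, size=4):
    """Cut a message into numbered packets: (sender, seq, payload)."""
    return [(sender, i, message[i:i + size])
            for i in range(0, len(message), size)]

def reassemble(packets, sender):
    """Collect one sender's packets and restore order by sequence number."""
    mine = sorted(p for p in packets if p[0] == sender)
    return "".join(payload for _, _, payload in mine)

# Two senders share one link: their packets travel interleaved.
a = packetize("alice", "hello world")
b = packetize("bob", "packet switching")
link = [p for pair in zip(b, a) for p in pair] + b[len(a):]

print(reassemble(link, "alice"))  # hello world
print(reassemble(link, "bob"))    # packet switching
```

This interleaving is exactly what lets many users share one infrastructure at the same time, unlike a circuit-switched call that holds its path for the whole conversation.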
Layer | Function | Orientation | Examples |
Application | Delivery of data to an application | Message | HTTP (the Web), SMTP (email despatch) |
Transport | Delivery of data to a node | Segment | TCP, UDP |
Network | Data addressing and transmission | Datagram | IP |
Link | Network access | Packet | Ethernet, PPP |
Physical | Handling of signals on a medium | Signals | CSMA/CD, ADSL; media: co-axial cable, twisted-pair copper (phone) cable, fibre-optic cable, air |
For devices to communicate successfully over a packet-switched network, it is necessary for them to work to the same rules. A set of rules of this kind is called a protocol. Rather than a single protocol, the workings of packet-switched networks, including the Internet, were conceived as a hierarchy of layers. This has the advantage that different solutions can be substituted for one another at each layer. For example, the underlying transmission medium can be twisted-pair copper cable (which exists in vast quantities because that was the dominant form of wiring for voice services for a century), co-axial cable (which is used for cable-TV and for Ethernet), fibre-optic cable, or a wireless medium using some part of the electromagnetic spectrum. This layering provides enormous flexibility, which has underpinned the rapid changes that have occurred in Internet services.
The deepest layers enable sending devices
to divide large messages into smaller packets, and generate signals on the
transmission medium that represent the content of the packets; and enable
receiving devices to interpret those signals in order to retrieve the contents,
and to re-assemble the original message. The mid-layer protocols provide
a means of getting the messages to the right place, and the upper-layer
protocols use the contents of the messages in order to deliver services.
Exhibit 3.3 provides an overview of the layers as they are currently perceived.
http://www.anu.edu.au/people/Roger.Clarke/II/OzI04.html
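The division of labour between the layers can be sketched as successive encapsulation (Python; the header strings are invented for the sketch, not real protocol formats): each layer wraps the payload handed down from the layer above, and the receiver unwraps in the reverse order.

```python
# Illustrative encapsulation: each layer prepends its own header.

def encapsulate(message):
    segment = f"TCP|{message}"   # transport layer: segment
    datagram = f"IP|{segment}"   # network layer: datagram
    frame = f"ETH|{datagram}"    # link layer: packet/frame
    return frame

def decapsulate(frame):
    # The receiving device strips the headers in reverse order.
    datagram = frame.removeprefix("ETH|")
    segment = datagram.removeprefix("IP|")
    return segment.removeprefix("TCP|")

wire = encapsulate("GET / HTTP/1.0")
print(wire)               # ETH|IP|TCP|GET / HTTP/1.0
print(decapsulate(wire))  # GET / HTTP/1.0
```

Because each layer only inspects its own header, any layer's implementation can be swapped (e.g. Ethernet for PPP) without disturbing the others, which is the flexibility the text describes.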
The application protocol layer utilises the transmission medium and the lower and middle protocol layers as an infrastructure, in order to deliver services. Some services are provided by computers for other computers, some by computers but for people, and some by people and for people. Key services that are available over the underlying infrastructure include e-mail and the World Wide Web (which together dominate Internet traffic volumes), file transfer and news (also referred to as 'netnews' and by its original name 'Usenet news'). There are, however, several score other services, some of which have great significance to particular kinds of users, or as enablers of better-known services.
During the early years, the services that were available were primarily remote login to distant machines (using rlogin and telnet from 1972), email (from 1972), and file transfer protocol (ftp, from 1973). In 1973, email represented 75% of all ARPANET traffic. By 1975, mailing lists were supported, and by 1979-82 emoticons such as (:-)} were becoming established. By 1980, MUDs and bulletin boards existed. The email service in use in 2004 was standardised as early as 1982. Synchronous multi-person conversations were supported from 1988 by Internet Relay Chat. This was also significant because the innovation was developed in Finland, whereas a very large proportion of the technology had been, and continues to be, developed within the U.S.A.
By 1990, over 100,000 hosts were connected, and innovation in application-layer protocols, and hence in services, accelerated. Between 1990 and 1994, a succession of content-provision, content-discovery and content-access services were released, as existing news and bulletin-board arrangements were reticulated over the Internet, and then enhanced protocols were developed, including archie (an indexing tool for ftp sites developed in Canada), the various 'gopher' systems (generic menu-driven systems for accessing files, supported by the veronica discovery tool), and Brewster Kahle's WAIS content search engines. Between 1991 and 1994, the World Wide Web emerged, from an Englishman and a Frenchman working in Switzerland; and in due course the Web swamped all of the other content-publishing services. By 1995, it was already carrying the largest traffic-volume of any application-layer protocol.
Exhibit 3.4, which is a revised version of an exhibit in Clarke (1994c), provides a classification scheme for the services available over the Internet.
http://www.anu.edu.au/people/Roger.Clarke/II/OzI04.html
192.104.32.19, ... 204.39.45.190, ...194.2.37.1
The Internet Corporation for Assigned Names and Numbers
(ICANN) is an internationally organized, non-profit corporation that has responsibility
for Internet Protocol (IP) address space allocation, protocol identifier assignment,
generic (gTLD) and country code (ccTLD) Top-Level Domain name system management,
and root server system management functions. These services were originally
performed under U.S. Government contract by the Internet Assigned Numbers Authority
(IANA) and other entities. ICANN now performs the IANA function.
As a private-public partnership, ICANN is dedicated to preserving the operational
stability of the Internet; to promoting competition; to achieving broad representation
of global Internet communities; and to developing policy appropriate to its
mission through bottom-up, consensus-based processes.
Class | No. of networks that can be served | No. of hosts each network can have | Use |
A | 2^7 - 2 = 126 | 2^24 - 2 = 16777214 | Really big organisations |
B | 2^14 - 2 = 16382 | 2^16 - 2 = 65534 | Big companies, universities (ANU: 2 class B) |
C | 2^21 - 2 = 2097150 | 2^8 - 2 = 254 | Schools, small organisations |
D | 2^28 - 2 = 268435454 | multicast addresses | |
E | 2^27 - 2 = 134217726 | reserved addresses | |
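The class of a (pre-CIDR) IPv4 address follows directly from the leading bits of its first octet. A small Python sketch classifies an address accordingly:

```python
def ip_class(address: str) -> str:
    """Classful (pre-CIDR) IPv4 class, from the first octet's leading bits."""
    first = int(address.split(".")[0])
    if first < 128:   # leading bit 0
        return "A"
    if first < 192:   # leading bits 10
        return "B"
    if first < 224:   # leading bits 110
        return "C"
    if first < 240:   # leading bits 1110 (multicast)
        return "D"
    return "E"        # leading bits 1111 (reserved)

for addr in ("192.104.32.19", "204.39.45.190", "10.0.0.1"):
    print(addr, "->", ip_class(addr))
```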
Countries Ranked by Population: 2006
Rank | Country | Population |
1 | China | 1,313,973,713 (annual growth 0.6%; total fertility 1.7 per woman) |
2 | India | 1,095,351,995 (annual growth 1.4%; total fertility 2.8 per woman) |
3 | United States | 298,444,215 |
4 | Indonesia | 245,452,739 |
5 | Brazil | 188,078,227 |
Source : http://www.census.gov/cgi-bin/ipc/idbrank.pl and http://www.census.gov/cgi-bin/ipc/idbsum.pl?cty=IN
au | AUSTRALIA |
hk | HONG KONG |
ca | CANADA |
at | AUSTRIA |
jp | JAPAN |
cn | CHINA |
be | BELGIUM |
uk | UNITED KINGDOM |
fr | FRANCE |
bj | BENIN |
us | UNITED STATES |
de | GERMANY |
com | Usually a company or other commercial institution or organisation |
edu | An educational institution (e.g. ANU) |
gov | United States Government |
int | International organisations (e.g. NATO) |
mil | United States Military |
net | Network-related activities |
org | Intended to serve the noncommercial community, e.g. NGOs (UNESCO) |
These un-sponsored top-level domains are open and unrestricted. Traditionally, however, most names are intended or reserved for specific use, as listed below. Please contact your registrar for more information or visit the Registry websites listed below.
The .aero, .coop, and .museum TLDs are sponsored TLDs and are designed for use within a specified community. Registration restrictions for these TLDs have been developed by the sponsor with input from the community.
Source : http://www.iana.org/gtld/gtld.htm
Domain names can be registered through many different
companies (known as "registrars") that compete with one another.
The registrar you choose will ask you to provide various contact and technical
information that makes up the registration. The registrar will then keep records
of the contact information and submit the technical information to a central
directory known as the "registry." This registry provides other computers
on the Internet the information necessary to send you e-mail or to find your
web site. You will also be required to enter a registration contract with the
registrar, which sets forth the terms under which your registration is accepted
and will be maintained.
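The name-to-address mapping that registration feeds into the registry is what a resolver queries. A minimal sketch using Python's standard library (`localhost` is used so the sketch works without network access; for a real lookup you would pass a registered domain such as example.com):

```python
import socket

# A resolver turns a name into the IP address the name system points to.
address = socket.gethostbyname("localhost")
print(address)  # 127.0.0.1
```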
1 URI Partitioning
There is some confusion in the web community over the partitioning of URI space,
specifically, the relationship among the concepts of URL, URN, and URI. The
confusion owes to the incompatibility between two different views of URI partitioning,
which we call the "classical" and "contemporary" views.
1.1 Classical View
During the early years of discussion of web identifiers (early to mid 90s),
people assumed that an identifier type would be cast into one of two (or possibly
more) classes. An identifier might specify the location of a resource (a URL)
or its name (a URN) independent of location. Thus a URI was either a URL or
a URN. There was discussion about generalizing this by addition of a discrete
number of additional classes; for example, a URI might point to metadata rather
than the resource itself, in which case the URI would be a URC (citation). URI
space was thus viewed as partitioned into subspaces: URL and URN, and additional
subspaces, to be defined. The only such additional space ever proposed was URC
and there never was any buy-in; so without loss of generality it's reasonable
to say that URI space was thought to be partitioned into two classes: URL and
URN. Thus for example, "http:" was a URL scheme, and "isbn:"
would (someday) be a URN scheme. Any new scheme would be cast into one or the
other of these two classes.
1.2 Contemporary View
Over time, the importance of this additional level of hierarchy seemed to lessen;
the view became that an individual scheme does not need to be cast into one
of a discrete set of URI types such as "URL", "URN", "URC",
etc. Web-identifier schemes are in general URI schemes; a given URI scheme may
define subspaces. Thus "http:" is a URI scheme. "urn:" is
also a URI scheme; it defines subspaces, called "namespaces". For
example, the set of URNs of the form "urn:isbn:n-nn-nnnnnn-n" is a
URN namespace. ("isbn" is a URN namespace identifier. It is not a
"URN scheme" nor a "URI scheme").
Further according to the contemporary view, the term "URL" does not
refer to a formal partition of URI space; rather, URL is a useful but informal
concept: a URL is a type of URI that identifies a resource via a representation
of its primary access mechanism (e.g., its network "location"), rather
than by some other attributes it may have. Thus as we noted, "http:"
is a URI scheme. An http URI is a URL. The phrase "URL scheme" is
now used infrequently, usually to refer to some subclass of URI schemes which
exclude URNs.
1.3 Confusion
The body of documents (RFCs, etc) covering URI architecture, syntax, registration,
etc., spans both the classical and contemporary periods. People who are well-versed
in URI matters tend to use "URL" and "URI" in ways that
seem to be interchangeable. Among these experts, this isn't a problem. But among
the Internet community at large, it is. People are not convinced that URI and
URL mean the same thing, in documents where they (apparently) do. When one sees
an RFC that talks about URI schemes (e.g. [RFC 2396]), another that talks about
URL schemes (e.g. [RFC 2717]), and yet another that talks of URN schemes ([RFC
2276]) it is natural to wonder what's the difference, and how they relate to
one another. While RFC 2396 1.2 attempts to address the distinction between
URIs, URLs and URNs, it has not been successful in clearing up the confusion.
http://www.w3.org/TR/uri-clarification/
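The contemporary view, in which every identifier is simply a URI with a scheme, is what standard URI parsers implement. A quick Python illustration with an http URI and a urn URI (the ISBN digits below are a made-up placeholder):

```python
from urllib.parse import urlparse

# Under the contemporary view, both of these are URIs with a scheme;
# "urn:" then defines namespaces such as "isbn" inside its own space.
web = urlparse("http://www.w3.org/TR/uri-clarification/")
name = urlparse("urn:isbn:0-00-000000-0")

print(web.scheme)   # http
print(name.scheme)  # urn
print(name.path)    # isbn:0-00-000000-0
```

Note that the parser does not treat "urn" specially: it is just another scheme, exactly as the contemporary view prescribes.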
When you change a URI on your server, you can never completely
tell who will have links to the old URI. They might have made links from regular
web pages. They might have bookmarked your page. They might have scrawled the
URI in the margin of a letter to a friend.
When someone follows a link and it breaks, they generally lose confidence in
the owner of the server. They are also frustrated, both emotionally and practically,
in accomplishing their goal.
Enough people complain all the time about dangling links that I hope the damage
is obvious. I hope it is also obvious that the reputation damage is to the maintainer
of the server whose document vanished.
http://www.w3.org/Provider/Style/URI
Protocol | Port |
ftp | * |
telnet | 23 |
smtp | 25 |
gopher | 70 |
http | 80 |
nntp | 119 |
SSL | 443 |
Source : http://www.pierobon.org/iis/urlparts.htm
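These well-known ports are what a client assumes when a URL does not name a port explicitly. A minimal Python sketch (the mapping is hard-coded here to mirror the table; ftp's control port 21 stands in for the table's "*", since ftp also uses port 20 for data):

```python
from urllib.parse import urlparse

# Well-known default ports, mirroring the table above.
DEFAULT_PORTS = {"ftp": 21, "telnet": 23, "smtp": 25,
                 "gopher": 70, "http": 80, "nntp": 119, "https": 443}

def effective_port(url: str) -> int:
    """Port a client would connect to: explicit if given, else the default."""
    parts = urlparse(url)
    return parts.port if parts.port is not None else DEFAULT_PORTS[parts.scheme]

print(effective_port("http://www.w3.org/TR/uri-clarification/"))  # 80
print(effective_port("http://example.org:8080/"))                 # 8080
```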