Difference Between RG-6 and RG-59 Coaxial Cables

All coaxial cables are constructed with a steel, copper, or aluminum conductor core, surrounded by a layer of white or black dielectric insulation. This is further covered with a tube-like braid of copper wires, which in turn is wrapped in a solid polyvinyl chloride insulating cover called a jacket. Some coaxial cables may have a layer of foil between the dielectric and the braid. Coaxial cables use the RG system to differentiate between the various kinds of cables. RG stands for ‘Radio Guide’, an obsolete military term. The numbers distinguish one cable from another, but they are essentially arbitrary and carry no specific meaning.

RG-6 and RG-59 are two of the most common varieties of coaxial cable, i.e., cable that carries radio-frequency signals for applications such as cable television and computer networks. You may also find these cables designated as RG-6/U or RG-59/U, but there is no practical difference. The two types differ in their construction, uses, and range of capabilities. We shall now look at how one can tell RG-6 and RG-59 coax cables apart.

How to Identify RG-6 and RG-59 Cables

Construction: Ideally, to identify whether a cable is RG-59 or RG-6, one only has to look at the jacket, where the details of the cable are printed. However, if this printing is not visible, look at the thickness and flexibility of the cable. Both cables have a characteristic impedance of 75 ohms. However, the RG-59 cable has a 22 AWG (American wire gauge) center conductor made of multiple strands of wire, while the RG-6 cable has an 18 AWG center with a solid copper core. This means that the RG-59 cable is smaller in diameter than the RG-6. Further, RG-6 cables can have additional foil and wire-braid shields along with thicker
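The 75-ohm figure above is a characteristic impedance, determined purely by the cable's geometry and dielectric material. As a rough sketch (the dimensions below are illustrative, not actual RG-6 or RG-59 measurements), the standard approximation Z0 ≈ (138/√εr)·log10(D/d) can be computed like this:

```python
import math

def characteristic_impedance(dielectric_constant, outer_d, inner_d):
    """Approximate Z0 of a coaxial cable in ohms from its geometry.

    Uses the standard formula Z0 = (138 / sqrt(er)) * log10(D/d),
    where D is the inner diameter of the shield and d the diameter
    of the center conductor (any units, as only the ratio matters).
    """
    return 138 / math.sqrt(dielectric_constant) * math.log10(outer_d / inner_d)

# Illustrative numbers only: a solid-polyethylene dielectric
# (er ~ 2.25) with a D/d ratio of about 6.5 lands near the
# 75-ohm target shared by both cable types.
print(round(characteristic_impedance(2.25, 6.5, 1.0), 1))
```

Because only the ratio D/d and the dielectric constant enter the formula, two cables of very different thickness, like RG-59 and RG-6, can share the same 75-ohm impedance.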

Different Types of Servers

A server is a device with a particular set of programs or protocols that provides various services. Together, a server and its clients form a client/server network, which provides routing systems and centralized access to information, resources, and stored data.
At the most basic level, one can consider it a technology solution that serves files, data, print, and fax resources to multiple computers. Advanced server versions, like Windows Small Business Server 2003 R2, enable the user to manage accounts and passwords, allow or limit access to shared resources, automatically back up data, and access business information remotely. For example, a file server is a machine that maintains files and allows clients or users to upload and download files from it. Similarly, a web server hosts websites and allows users to access them. Clients mainly include computers, printers, faxes, or other devices that can be connected to the server. By using a server, one can securely share files and resources like fax machines and printers. With a server network, employees can also access the Internet or company e-mail simultaneously.
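To make the file-server idea concrete, here is a minimal sketch in Python (the names, port numbers, and file contents are invented for illustration, and no particular product works this way): the server holds "files" in memory, and a client connects over a socket, names a file, and receives its contents.

```python
import socket
import threading

# In-memory "files" the server can hand out (illustrative content).
FILES = {"readme.txt": b"hello from the server"}

def serve_once(port, ready=None):
    """Accept a single client, read a filename, and send the file back."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        if ready is not None:
            ready.set()  # signal that the server is now listening
        conn, _ = srv.accept()
        with conn:
            name = conn.recv(1024).decode().strip()
            conn.sendall(FILES.get(name, b"NOT FOUND"))

def request(port, name):
    """Client side: connect to the server, ask for a file, return the reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(name.encode())
        return cli.recv(4096)
```

Running serve_once in a background thread and calling request(port, "readme.txt") returns the stored bytes. A real file or web server works on the same request/response principle, just with disks, protocols like HTTP, and many concurrent clients.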
Types of Servers
Server Platform
The server platform is the fundamental hardware or software on which a server runs; it acts as the engine that drives the server. The term is often used synonymously with ‘operating system’.
Application Server
Also known as a type of middleware, it occupies a substantial amount of computing region between database servers and the end user, and is commonly used to connect the two.
Audio/Video Server
It provides multimedia capabilities to websites by helping the user to broadcast streaming multimedia content.
Chat Server
It enables users to exchange data in an environment similar to an Internet newsgroup, providing real-time discussion capabilities.
Fax Server
It is one of the best options for organizations that want to minimize incoming and outgoing telephone resources, but

Benefits of Intranet to Business

What is Intranet?
An Intranet is a private computer network, based on Internet technologies, that can be accessed by the employees of an organization. An Intranet gives employees easy access to internal files and documents from their individual workstations. The sharing of data made possible through the Intranet not only saves employees’ time, but also allows employees at various levels to access data. It also contributes to a paperless office.
Benefits of Intranet
Most modern businesses today are adopting Intranet technology due to the competitive advantages it offers in handling the corporate information essential to any business.
An Intranet is extremely beneficial for communication and collaboration among employees, which is essential to the successful functioning of any business organization. It provides this in the form of tools like discussion groups, Intranet forums, and bulletin boards. These tools help in conveying and distributing necessary information and documents among the employees of an organization.
This results in easy communication and a sound relationship between employees and top-level management. Today, many businesses working on projects use Intranet tools such as discussion forums, chats, e-mail, and electronic bulletin boards, which help communication between different departments of an organization.
Time Saving
Every business knows the importance and value of time. Intranet technology allows valuable information to be distributed among employees quickly and efficiently. An Intranet saves time through interactivity, i.e., employees can access information at a time that suits them, rather than sending e-mail and waiting for replies.
Intranet technology delivers information to employees quickly and helps them perform their various tasks responsibly. An employee can access data from any database of the organization without wasting time. Employees working on projects can collaborate easily, ensuring better and faster results.
Reduce Costs
An important benefit of Intranet

Star Topology Advantages and Disadvantages

Did You Know?
In the field of computers, ring, bus, tree, and star are the names assigned to specific network topologies.
Basically, the term ‘network topology’ refers to the layout pattern of interconnections of various elements of a computer network, like the nodes, links, etc. The concept is further categorized into two types: (i) physical topology, which refers to the physical design of a network, and (ii) logical topology, which focuses on how the data is actually transferred within the network.
The physical topology is further classified into six different types; namely, the point-to-point network, ring network, mesh network, bus network, tree network, and the star network. Of these, the star network in particular is considered very popular owing to its numerous advantages. That is not to say it doesn’t have any disadvantages of its own. It is important to go through the advantages and disadvantages of different network topologies―and not just the star topology alone―to determine which of these is suitable for you.
What is Star Topology?
In this type of network topology, all the nodes are connected individually to one common hub. The transmission stations are connected to the central node in such a manner that the design resembles the shape of a star, and hence, the name. The star topology design resembles a bicycle wheel with spokes radiating from the center. In this case, data exchange can only be carried out indirectly via the central node to which all the other nodes are connected.
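The hub-and-spoke relationship described above can be sketched in a few lines of Python (a toy model for illustration, not a network implementation): nodes hold no links to each other, only to the hub, and every message is relayed through it.

```python
class Hub:
    """The central node; every attached node communicates through it."""

    def __init__(self):
        self.nodes = {}

    def attach(self, node):
        self.nodes[node.name] = node
        node.hub = self

    def relay(self, sender, recipient, message):
        # Data exchange is indirect: the hub forwards every message.
        if recipient not in self.nodes:
            raise KeyError(f"{recipient} is not attached to the hub")
        return self.nodes[recipient].receive(sender, message)


class Node:
    """A transmission station; its only connection is to the hub."""

    def __init__(self, name):
        self.name = name
        self.hub = None
        self.inbox = []

    def send(self, recipient, message):
        return self.hub.relay(self.name, recipient, message)

    def receive(self, sender, message):
        self.inbox.append((sender, message))
        return True
```

Note how the model captures the topology's key property: detaching one Node leaves all the others working, but nothing works without the Hub.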
Advantages and Disadvantages
As we said earlier, even the star topology has its own positives and negatives, which have to be taken into consideration when evaluating the feasibility of the setup. While the isolation of devices happens to be its trump card―with most of its advantages revolving around this particular

What does ASCII Stand For?

ASCII stands for American Standard Code for Information Interchange. It is a form of character encoding that is based on the English alphabet. ASCII codes represent the text in computers and communication tools that handle text.

ASCII characters were developed from telegraphic codes. Work on developing the ASCII standard began in 1960. The first edition was published in 1963, and the standard underwent updates in 1967 and in 1986. The committee working on the development of the ASCII character set contemplated a shift-key function, which would have allowed them to build 6-bit representations of character symbols: certain shift codes would determine how the character codes that followed were interpreted. However, the shift-key function was discarded from the design, and seven-bit codes were formulated instead. This left the eighth bit of a byte free, which allowed the ASCII code design to support parity bits. Robert Bemer, a computer scientist at IBM, was instrumental in the development of some features that were added to ASCII in its revised versions.

The ASCII character set is a collection of 33 non-printing characters, 94 printable characters, and the space character, which is considered non-printable. The first 32 ASCII codes are reserved for control characters, like the null character, the characters denoting the start and end of text, the line feed character, and the shift-in and shift-out characters, as well as characters used for controlling devices. The remaining ASCII codes are allotted to actual printable character symbols. The ASCII code provides a mapping between digital bit patterns and characters, thus allowing devices to communicate with each other.
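Python exposes this mapping directly through the built-ins ord() and chr(), which makes the bit-pattern view of ASCII easy to inspect:

```python
# ASCII assigns each character a number; ord() and chr() convert
# between the two.
assert ord("A") == 65 and chr(65) == "A"

# The first 32 codes are control characters, e.g. line feed (10),
# shift out (14), and shift in (15).
assert ord("\n") == 10
assert ord("\x0e") == 14 and ord("\x0f") == 15

# Code 32 is the space character.
assert ord(" ") == 32

# Every ASCII character fits in a 7-bit pattern; "A" (65) is 1000001.
assert format(ord("A"), "07b") == "1000001"
assert all(ord(c) < 128 for c in "ASCII maps bit patterns to characters")
```

Because every code fits in seven bits, the eighth bit of a byte is left over, which is exactly the bit the early designs reserved for parity.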

History of Macintosh Computers

Apple Inc., a famous name in the computer industry, develops and markets personal computers under the brand name Macintosh, better known as Mac. The Macintosh 128K, released on January 24, 1984, was a commercial success. It was the first commercially successful personal computer to come with a mouse and a graphical user interface. Over the years, Apple Inc. evolved, and today it is a business giant in the field of computers.


Jef Raskin, an American human-computer interface expert, was the Apple employee who came up with the idea of building an affordable and easy-to-use computer. In 1979, Raskin started planning a team that would bring his idea to reality. He soon formed a team including Bill Atkinson, a Lisa team member, Burrell Smith, a service technician, and others, and they started working on Raskin’s idea. The first Macintosh board the team developed had 64 KB of RAM, used a Motorola microprocessor, and featured a black-and-white bitmap display.

By the end of 1980, Smith, a member of the first Macintosh team, created a board that ran at a higher speed, featured a higher-capacity RAM, and supported a wider display. Steve Jobs, impressed by this design, began to take an interest in the project. His ideas strongly influenced the design of the final Macintosh. Jobs resigned from Apple in 1985.

The following years witnessed the development of desktop publishing and of applications such as Macromedia FreeHand, Adobe Photoshop, and Adobe Illustrator, which helped in the expansion of the desktop publishing market. It was also during these years that the shortcomings of the Mac were exposed to users: it did not have a hard disk drive and had little memory. In 1986, Apple came

Evolution of Supercomputers

Bigger, faster, stronger, and better – man seems to thrive on superlatives, especially in the realm of technology. With mobiles and computers, the trend is toward speedier and smaller. Then there are supercomputers, the brains or “Einsteins” of the computing world. Supercomputers are faster, more powerful, and very large compared to their everyday counterparts, and they have a wide range of uses in complex and data-intensive applications. Where did it all begin? To answer that question, read on for a brief history of supercomputers.

The Beginning of the Supercomputer Age – The 1960s

Livermore Advanced Research Computer
In 1960, the UNIVAC LARC (Livermore Advanced Research Computer) was unveiled. It cannot be considered the first supercomputer, since its configuration was not as powerful as expected, but it is considered the first attempt at building such a machine. It was built by Remington Rand, and at the time it was the fastest computer around. Following is a list of features of the UNIVAC LARC:

1. Had 2 Central Processing Units (CPUs) and one I/O (input/output) processor.
2. Had a core memory of 8 banks, storing 20,000 words.
3. Memory access time was 8 microseconds and cycle time was 4 microseconds.

1961 saw the creation of the IBM 7030, or Stretch. In the race to build and sell the first supercomputer, IBM had designs and plans but lost the first contract to the LARC. Fearing that the LARC would emerge as the ultimate winner, IBM promised a very powerful machine and set high expectations that it ultimately could not live up to. The 7030 was compared to an earlier IBM model, the IBM 7090, a mainframe computer released in 1959. Its computational speed was projected to be 100x

Linux: History and Introduction

Linux history
Linux is one of the most popular operating systems, and it is free software supporting open-source development. Originally designed for Intel 80386 microprocessors, Linux now runs on a variety of computer architectures and is widely used.

A Brief History

Unix was the third operating system in a line that began with CTSS and continued with MULTICS. A team of programmers led by Prof. Fernando J. Corbato at the MIT Computation Center wrote CTSS, the first operating system to support the concept of time-sharing. AT&T worked on the MULTICS operating system but left the project as it was failing to meet deadlines. Ken Thompson, Dennis Ritchie, and Brian Kernighan at Bell Labs used ideas from the MULTICS project to develop the first version of Unix.

MINIX was a Unix-like system released by Andrew Tanenbaum. Its source code was made available to users, but there were restrictions on the modification and distribution of the software. On August 25, 1991, Linus Torvalds, a second-year computer science student at the University of Helsinki, announced that he was going to write an operating system. With the intent to replace MINIX, Torvalds started writing the Linux kernel. With this announcement, a success story had begun. Linux initially depended on the MINIX user space, but once Linux was released under the GNU GPL, the GNU developers worked toward integrating Linux with the GNU components.

An Introduction to the Linux Operating System

The Unix-like operating system that uses the Linux kernel is known as the Linux operating system. In 1991, Linus Torvalds came up with the Linux kernel. After he started writing it, around 250 programmers contributed to the kernel code. Richard Stallman, an American software developer, who was a part of the

When was the First Computer Made?

It is interesting to know that the term ‘computer’ originally referred to people whose job was to perform calculations on a continuous basis, such as navigational tables, tide charts, and planetary positions for astronomical needs.
Human error, boredom, and the comparatively slow pace of manual work led inventors to mechanize the process and eventually come up with machines that had brains – computers! The need for accurate and fast calculations led to the invention of tools like Napier’s bones for logarithms, the slide rule, the calculating clock, Pascal’s Pascaline, and the punched-card loom, which can be considered the forerunners of today’s computers.

Any discussion of the history and evolution of computers would be incomplete without a mention of Charles Babbage, who is considered the ‘Father of Computers’. Way back in 1822, he was building a steam-driven calculating machine the size of a room, which he called the ‘Difference Engine’.

The Difference Engine
The Difference Engine project, though heavily funded by the British government, never saw the light of day. Yet, in pursuit of a better machine for more complex calculations, Babbage came up with the ‘Analytical Engine’, which had parts analogous to the memory and the central processing unit that our systems have today. The Hollerith desk, which combined earlier calculating tools, was later invented in the U.S. to meet the need to tabulate the 1890 census.
Z1 Computer
In the 1930s and 1940s, there were attempts to build machines that served the purpose of computing numbers and solving problems, like the Z1 computer of 1936.
Z Machines
Konrad Zuse, the creator of the Z1, went on to build the electromechanical “Z machines”. The Z3, completed in 1941, was the first working programmable machine; it featured binary arithmetic, including floating-point arithmetic, and a measure of programmability.

History of Microprocessor

The evolution of the microprocessor has been one of the greatest achievements of our civilization. In some cases, the terms ‘CPU’ and ‘microprocessor’ are used interchangeably to denote the same device. Like every genuine engineering marvel, the microprocessor too has evolved through a series of improvements throughout the 20th century. A brief history of the device along with its functioning is described below.

Working of a Processor

☞ It is the central processing unit, which coordinates all the functions of a computer. It generates timing signals, and sends and receives data to and from every peripheral used inside or outside the computer.

☞ The commands required to do this are fed into the device in the form of current variations, which are converted into meaningful instructions by the use of a Boolean Logic System.
☞ It divides its functions into two categories: logical and processing.

☞ The arithmetic and logic unit and the control unit handle these functions, respectively. Information is communicated through bundles of wires called buses.

☞ The address bus carries the ‘address’ of the location with which communication is desired, while the data bus carries the data that is being exchanged.
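A toy sketch of that address-bus/data-bus exchange (class names, addresses, and values are invented for illustration): the processor places a location on the address bus, and the memory answers with the stored value on the data bus.

```python
class Memory:
    """A tiny memory model addressed over a toy 'bus'."""

    def __init__(self, contents):
        self.cells = dict(contents)

    def read(self, address):
        # The location travels one way (address bus); the stored
        # value travels back the other way (data bus).
        return self.cells.get(address, 0)

    def write(self, address, data):
        # Here both buses carry information toward memory:
        # the address says where, the data says what.
        self.cells[address] = data


# Two made-up byte values at made-up addresses.
mem = Memory({0x00: 0x3E, 0x01: 0x2A})
assert mem.read(0x01) == 0x2A
mem.write(0x02, 0xFF)
assert mem.read(0x02) == 0xFF
```

The split mirrors real hardware: the width of the address bus bounds how much memory can be reached, while the width of the data bus bounds how much can be transferred per cycle.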

Types of Microprocessors
◆ CISC (Complex Instruction Set Computers)
◆ RISC (Reduced Instruction Set Computers)
◆ VLIW (Very Long Instruction Word Computers)
◆ Superscalar processors
Types of Specialized Processors
◆ General Purpose Processor (GPP)
◆ Special Purpose Processor (SPP)
◆ Application-Specific Integrated Circuit (ASIC)
◆ Digital Signal Processor (DSP)
History and Evolution
The First Stage
The invention of the transistor in 1947 was a significant development in the world of technology. It could perform the function of a large component used in the computers of the early years. Shockley, Brattain, and Bardeen are credited with this invention and were awarded the Nobel Prize for it. Soon, it was found that the function this large