“The most successful man is the one who has the best information.” This remark summarizes the business of information technologies—the production, processing, storing, communication, and use of information.
Information technologies have resulted in the development of one of the world’s largest industries. Today, cutting-edge technologies such as computers, software and artificial intelligence, fiber optics, networks, and standards have an immense impact on information technologies. Among the many applications of information technologies, three of particular importance are traditional telephony, mobile cellular telephony, and data processing and communication. Information technologies, in turn, affect many industries and society as a whole.
The revolution in the software industry is still to come. We often think of the “software crisis” as a problem that can be solved only through the efforts of millions of programmers. But developments in hardware offer an opportunity to solve the software problem by combining good tools with engineering skills.
The overall development of software technology can be described as a series of discontinuities. The technology of computer languages has progressed from the machine level, where programs are written more or less as ones and zeros, to the assembler level, which affords the programmer a somewhat friendlier notation but still requires the logic of the program to be described in the same minute detail. Next are the so-called high-level languages, such as Fortran, COBOL, Pascal, and Ada, which imitate the languages of mathematics, accounting, or whatever the application area may be, and allow the programmer a higher level of expression. However, these languages still require the programmer to explain exactly how the computer is going to solve a problem, and therefore the order of the lines of code is very important. Finally, there are very high level, or application-oriented, languages, such as LISP, Prolog, and other so-called fourth-generation languages (4GL), in which the programmer declares what the computer should do, not how it should do it.
All the “how” languages, up to and including Ada, are called procedural, or imperative, languages. The very high level languages are declarative, or applicative, languages. They differ from the “how” languages in clarity, suitability for parallel execution, computing power requirements, and applications.
Declarative languages are usually more concise and clearer than procedural languages. They are also intrinsically suited to parallel execution, whereas procedural languages can exploit parallelism in a problem only with great difficulty. A drawback of declarative languages, until now, has been their need for substantial computing power, or preferably a new computer architecture. Procedural languages, on the other hand, are quite efficient in traditional “von Neumann” computers.
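The "what" versus "how" distinction can be made concrete with a small sketch. The example below is illustrative only (it is not from the original text, and Python postdates the languages discussed): the same computation is written once procedurally, with explicit step-by-step control, and once declaratively, where the evaluation order of the individual squarings is left unspecified and could in principle proceed in parallel.

```python
# Procedural ("how"): every step and its order are spelled out explicitly.
def sum_of_squares_procedural(numbers):
    total = 0
    for n in numbers:        # explicit iteration order
        total += n * n       # explicit accumulation step
    return total

# Declarative ("what"): only the desired result is stated; the order in
# which the squarings are evaluated is not prescribed by the programmer.
def sum_of_squares_declarative(numbers):
    return sum(n * n for n in numbers)

data = [1, 2, 3, 4]
assert sum_of_squares_procedural(data) == 30
assert sum_of_squares_declarative(data) == 30
```

The declarative version is also the more concise of the two, which illustrates the clarity advantage noted above.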
Partly because they need so many MIPS, declarative languages have found relatively few applications in industry until now. But there are reasons to believe that this is about to change—that the procedural languages are like the dinosaurs, growing larger and larger toward their extinction, and that the present declarative languages are like the first mammals, still small and hiding in the bushes, but poised ready to take over the world.
Language is only one factor that influences software efficiency and quality. The methods and tools used to support software development and handling are as important as the structure of the hardware and software.
In the telecommunications industry, large real-time systems, with software running to millions of lines of code, are needed to support our public switching system (AXE). The associated data bases total more than 400 gigabytes. Another way to grasp the size and complexity of this system is to consider that we have installed more than 10 million telephone lines in more than 50 countries.
The software content of a single AXE installation is on the order of 2–5 megabytes, and there are numerous versions to meet different market requirements. This calls for very good tools for releasing different versions and updates, and it makes it absolutely vital to use results from information technologies research. With large systems, using these results in turn requires extremely good software management and planning, as well as new ways of structuring systems. Reusable software and different kinds of software tools for different parts of a computerized system are needed. Today new technologies are continually being introduced; for example, artificial intelligence technology could be valuable for creating a good human-machine interface to a “conventional” computer system.
Artificial intelligence combines such mechanistic concepts as repetition, precision, and data handling, and then uses this combination in the broader applications of expert systems and knowledge engineering. An expert system implies a combination of a knowledge base and data linked to a general problem solver. This is in sharp contrast to conventional programming, where the data are processed by programs in which the application knowledge is hard-wired, that is, where the algorithms themselves embody the knowledge of the application.
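The separation of knowledge base and general problem solver can be sketched in a few lines. The following is a minimal illustration under invented assumptions (the telephony facts and rule names are hypothetical, not drawn from AXE): the rules are held as data, and the solver is a generic forward-chaining loop that knows nothing about the application domain.

```python
# Knowledge base: if-then rules kept as data, separate from the solver.
# Each rule is (set of required facts, fact to conclude). These example
# facts are invented for illustration.
rules = [
    ({"has_dial_tone"}, "line_powered"),
    ({"line_powered", "handset_lifted"}, "can_dial"),
]

facts = {"has_dial_tone", "handset_lifted"}

# General problem solver: forward chaining. It contains no telephony
# knowledge; swapping in a different rule set changes the application.
def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
```

In a conventional program, the two rules above would instead be fixed branches inside the code; here they are data that can be inspected, extended, or replaced without touching the solver.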
Prototyping and new languages and means of specification make the early phases of development more precise; this is important because a major task in software development is the fundamental system design. But fourth-generation languages are not the only solution. As cutting-edge technologies advance, it is becoming more and more important to develop standards, formal or de facto, such as those put forth by standards-setting organizations and embodied in the operating systems of large manufacturers. Adhering to standards allows an organization to concentrate resources in areas where it can add substantial value.
The next crisis in computing will be the need to handle the rapidly growing amount of information that will be available in distributed data bases. This poses many challenges for research. For example, we need new ways of describing data, of classifying the relationships between data, and of finding and retrieving data already stored in data bases.
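One simple way to describe data together with the relationships between them is as subject-relation-object triples, which can then be retrieved by pattern. The sketch below is a hypothetical illustration (the exchange and line names are invented, and this is only one of many possible data models, not a method proposed in the text).

```python
# A tiny triple store: each fact is (subject, relation, object).
# All names here are invented for illustration.
triples = [
    ("exchange_1", "located_in", "Stockholm"),
    ("exchange_2", "located_in", "London"),
    ("line_42",    "served_by",  "exchange_1"),
]

# Retrieval by pattern matching: None acts as a wildcard, so the caller
# describes which facts are wanted rather than how to scan for them.
def query(pattern, store):
    return [t for t in store
            if all(p is None or p == v for p, v in zip(pattern, t))]

# All "located_in" relationships, regardless of subject and object:
print(query((None, "located_in", None), triples))
```

Classifying relationships explicitly in this way is what makes such queries possible without application-specific retrieval code.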