algorithm


algorithm

1. a logical, arithmetical, or computational procedure that, if correctly applied, ensures the solution of a problem
2. Logic, Maths a recursive procedure whereby an infinite sequence of terms can be generated
Collins Discovery Encyclopedia, 1st edition © HarperCollins Publishers 2005

algorithm

any method, procedure, or set of instructions for carrying out a task by means of a precisely specified series of steps or sequence of actions, e.g. as in long division, the hierarchical sequence of steps in a typical computer program, or the steps in a manufacturing process.
Collins Dictionary of Sociology, 3rd ed. © HarperCollins Publishers 2000

algorithm

[′al·gə‚rith·əm]
(mathematics)
A set of well-defined rules for the solution of a problem in a finite number of steps.
McGraw-Hill Dictionary of Scientific & Technical Terms, 6E, Copyright © 2003 by The McGraw-Hill Companies, Inc.

Algorithm

A well-defined procedure to solve a problem. The study of algorithms is a fundamental area of computer science. In writing a computer program to solve a problem, a programmer expresses in a computer language an algorithm that solves the problem, thereby turning the algorithm into a computer program. See Computer programming

Operation

An algorithm generally takes some input, carries out a number of effective steps in a finite amount of time, and produces some output. An effective step is an operation so basic that it is possible, at least in principle, to carry it out using pen and paper. In computer science theory, a step is considered effective if it is feasible on a Turing machine or any of its equivalents. A Turing machine is a mathematical model of a computer used in an area of study known as computability, which deals with such questions as what tasks can be algorithmically carried out and what cannot. See Automata theory

Many computer programs deal with a substantial amount of data. In such applications, it is important to organize data in appropriate structures to make it easier or faster to process the data. In computer programming, the development of an algorithm and the choice of appropriate data structures are closely intertwined, and a decision regarding one often depends on knowledge of the other. Thus, the study of data structures in computer science usually goes hand in hand with the study of related algorithms. Commonly used elementary data structures include records, arrays, linked lists, stacks, queues, trees, and graphs.
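
As a small aside on how closely these two choices are tied, the following Python fragment (an editorial illustration, not part of the quoted entry; the variable names are arbitrary) tests membership in the same data held in two different structures: a list forces a linear-scan algorithm, while a set allows an average constant-time hash lookup.

    # Membership test over the same data held in two different structures.
    values_list = [3, 1, 4, 1, 5, 9, 2, 6]
    values_set = set(values_list)

    def contains_linear(items, target):
        """Linear scan: examine each element until the target is found."""
        for item in items:
            if item == target:
                return True
        return False

    print(contains_linear(values_list, 5))  # True, found by scanning
    print(5 in values_set)                  # True, found by hashing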

Applications

Many algorithms are useful in a broad spectrum of computer applications. These elementary algorithms are widely studied and considered an essential component of computer science. They include algorithms for sorting, searching, text processing, solving graph problems, solving basic geometric problems, displaying graphics, and performing common mathematical calculations.

Sorting arranges data objects in a specific order, for example, in numerically ascending or descending order. Internal sorting arranges data stored internally in the memory of a computer. Simple algorithms for sorting by selection, by exchange, or by insertion are easy to understand and straightforward to code. However, when the number of objects to be sorted is large, the simple algorithms are usually too slow, and a more sophisticated algorithm, such as heap sort or quick sort, can be used to attain acceptable performance. External sorting arranges data records stored externally, for example in files on disk that are too numerous to fit into memory at once.
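
The contrast between the simple and the more sophisticated methods can be sketched in a few lines of Python (an illustrative example added here, not taken from the entry): insertion sort stands in for the simple methods, and a heap-based sort, built on the standard heapq module, stands in for heap sort.

    import heapq

    def insertion_sort(items):
        """Simple O(n^2) sort: insert each element into the sorted prefix."""
        result = list(items)
        for i in range(1, len(result)):
            key = result[i]
            j = i - 1
            while j >= 0 and result[j] > key:
                result[j + 1] = result[j]   # shift larger elements right
                j -= 1
            result[j + 1] = key
        return result

    def heap_sort(items):
        """O(n log n) sort: build a heap, then repeatedly pop the minimum."""
        heap = list(items)
        heapq.heapify(heap)
        return [heapq.heappop(heap) for _ in range(len(heap))]

    data = [42, 7, 19, 3, 25]
    print(insertion_sort(data))  # [3, 7, 19, 25, 42]
    print(heap_sort(data))       # [3, 7, 19, 25, 42]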

Searching looks for a desired data object in a collection of data objects. Elementary searching algorithms include linear search and binary search. Linear search examines a sequence of data objects one by one. Binary search adopts a more sophisticated strategy and is faster than linear search when searching a large array. A collection of data objects that are to be frequently searched can also be stored as a tree. If such a tree is appropriately structured, searching the tree will be quite efficient.
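
A minimal Python sketch of the two elementary strategies (an editorial illustration; binary search assumes the data are already sorted):

    def linear_search(items, target):
        """Examine the elements one by one; works on any sequence."""
        for index, item in enumerate(items):
            if item == target:
                return index
        return -1

    def binary_search(sorted_items, target):
        """Repeatedly halve the search interval; requires sorted input."""
        low, high = 0, len(sorted_items) - 1
        while low <= high:
            mid = (low + high) // 2
            if sorted_items[mid] == target:
                return mid
            if sorted_items[mid] < target:
                low = mid + 1
            else:
                high = mid - 1
        return -1

    data = [2, 3, 5, 7, 11, 13, 17]
    print(linear_search(data, 11))  # 4
    print(binary_search(data, 11))  # 4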

A text string is a sequence of characters. Efficient algorithms for manipulating text strings, such as algorithms to organize text data into lines and paragraphs and to search for occurrences of a given pattern in a document, are essential in a word processing system. A source program in a high-level programming language is a text string, and text processing is a necessary task of a compiler. A compiler needs to use efficient algorithms for lexical analysis (grouping individual characters into meaningful words or symbols) and parsing (recognizing the syntactical structure of a source program). See Software engineering
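
The most elementary pattern-searching algorithm simply tries every starting position; the Python sketch below is an editorial illustration (practical systems use faster methods such as Knuth-Morris-Pratt or Boyer-Moore).

    def find_pattern(text, pattern):
        """Return the index of the first occurrence of pattern in text, or -1.

        Naive method: try each starting position and compare character by
        character.  Worst-case time is O(len(text) * len(pattern)).
        """
        n, m = len(text), len(pattern)
        for start in range(n - m + 1):
            if text[start:start + m] == pattern:
                return start
        return -1

    print(find_pattern("the quick brown fox", "brown"))  # 10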

A graph is useful for modeling a group of interconnected objects, such as a set of locations connected by routes for transportation. Graph algorithms are useful for solving those problems that deal with objects and their connections—for example, determining whether all of the locations are connected, visiting all of the locations that can be reached from a given location, or finding the shortest path from one location to another.
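
For example, breadth-first search finds a path with the fewest connections between two locations; the sketch below is an editorial illustration with a made-up route map.

    from collections import deque

    # A small route map: each location maps to its directly connected neighbours.
    routes = {
        "A": ["B", "C"],
        "B": ["A", "D"],
        "C": ["A", "D"],
        "D": ["B", "C", "E"],
        "E": ["D"],
    }

    def shortest_path(graph, start, goal):
        """Breadth-first search: explores locations in order of distance."""
        queue = deque([[start]])
        visited = {start}
        while queue:
            path = queue.popleft()
            location = path[-1]
            if location == goal:
                return path
            for neighbour in graph[location]:
                if neighbour not in visited:
                    visited.add(neighbour)
                    queue.append(path + [neighbour])
        return None  # goal not reachable from start

    print(shortest_path(routes, "A", "E"))  # ['A', 'B', 'D', 'E']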

Mathematical algorithms are of wide application in science and engineering. Basic algorithms for mathematical computation include those for generating random numbers, performing operations on matrices, solving simultaneous equations, and numerical integration. Modern programming languages usually provide predefined functions for many common computations, such as random number generation, logarithm, exponentiation, and trigonometric functions.
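
As one concrete example, numerical integration by the trapezoidal rule takes only a few lines (a sketch added here for illustration; in practice one would usually call a library routine instead).

    import math

    def trapezoid(f, a, b, n=1000):
        """Approximate the integral of f over [a, b] using n trapezoids."""
        h = (b - a) / n
        total = 0.5 * (f(a) + f(b))
        for i in range(1, n):
            total += f(a + i * h)
        return total * h

    # The integral of sin(x) from 0 to pi is exactly 2.
    print(trapezoid(math.sin, 0.0, math.pi))  # approximately 2.0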

In many applications, a computer program needs to adapt to changes in its environment and continue to perform well. An approach to make a computer program adaptive is to use a self-organizing data structure, such as one that is reorganized regularly so that those components most likely to be accessed are placed where they can be most efficiently accessed. A self-modifying algorithm that adapts itself is also conceivable. For developing adaptive computer programs, biological evolution has been a source of ideas and has inspired evolutionary computation methods such as genetic algorithms. See Genetic algorithms
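
One common self-organizing structure of this kind is the move-to-front list, in which every accessed item is moved to the head so that frequently requested items are found after a short scan; the Python sketch below is an editorial illustration.

    class MoveToFrontList:
        """Self-organizing list: each accessed item moves to the front, so
        frequently accessed items tend to be found after a short scan."""

        def __init__(self, items):
            self.items = list(items)

        def access(self, target):
            """Linear search; on success, move the found item to the front."""
            for index, item in enumerate(self.items):
                if item == target:
                    self.items.insert(0, self.items.pop(index))
                    return True
            return False

    records = MoveToFrontList(["a", "b", "c", "d"])
    records.access("c")
    print(records.items)  # ['c', 'a', 'b', 'd']: 'c' is now the cheapest to reach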

Certain applications require a tremendous amount of computation to be performed in a timely fashion. An approach to save time is to develop a parallel algorithm that solves a given problem by using a number of processors simultaneously. The basic idea is to divide the given problem into subproblems and use each processor to solve a subproblem. The processors usually need to communicate among themselves so that they may cooperate. The processors may share memory, through which they can communicate, or they may be connected by communication links into some type of network such as a hypercube. See Concurrent processing, Multiprocessing, Supercomputer
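
The divide-and-combine idea behind many parallel algorithms can be sketched with Python's multiprocessing module (an editorial illustration; real shared-memory and message-passing systems differ considerably in detail): the data are split into chunks, each worker process solves one subproblem, and the partial results are combined.

    from multiprocessing import Pool

    def partial_sum(chunk):
        """Subproblem: sum one slice of the data."""
        return sum(chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        workers = 4
        size = len(data) // workers
        chunks = [data[i * size:(i + 1) * size] for i in range(workers)]

        with Pool(workers) as pool:
            # Each worker process solves one subproblem in parallel ...
            partials = pool.map(partial_sum, chunks)

        # ... and the partial results are combined into the final answer.
        print(sum(partials))                # 499999500000
        print(sum(partials) == sum(data))   # True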

McGraw-Hill Concise Encyclopedia of Engineering. © 2002 by The McGraw-Hill Companies, Inc.

algorithm

(algorithm, programming)
A detailed sequence of actions to perform to accomplish some task. Named after the Iranian mathematician Mohammed Al-Khawarizmi.

Technically, an algorithm must reach a result after a finite number of steps, thus ruling out brute force search methods for certain problems, though some might claim that brute force search was also a valid (generic) algorithm. The term is also used loosely for any sequence of actions (which may or may not terminate).

Paul E. Black's Dictionary of Algorithms, Data Structures, and Problems.
This article is provided by FOLDOC - Free Online Dictionary of Computing (foldoc.org)

algorithm

A set of ordered steps for solving a problem, such as a mathematical formula or the instructions in a program. The terms algorithm and "program logic" are synonymous as both refer to a sequence of steps to solve a problem. However, an algorithm often implies a more complex problem rather than the input-process-output logic of typical business software. See encryption algorithm.
Copyright © 1981-2025 by The Computer Language Company Inc. All Rights reserved. THIS DEFINITION IS FOR PERSONAL USE ONLY. All other reproduction is strictly prohibited without permission from the publisher.
The following article is from The Great Soviet Encyclopedia (1979). It might be outdated or ideologically biased.

Algorithm

 

one of the basic concepts (categories) of mathematics, devoid of a formal definition in terms of simpler concepts and abstracted directly from experience. Examples of algorithms are the familiar rules of addition, subtraction, multiplication, and long division taught in elementary school. In general, the term “algorithm” denotes any precise procedure specifying a calculational process (called algorithmic in this case) that begins with an arbitrary initial datum (drawn from a certain set of initial data possible for the given algorithm) and that is directed toward a result fully determined by the initial datum. For example, in the case of the algorithms of arithmetical operations mentioned above, the possible results may be natural numbers expressed in the decimal system, while the possible initial data may consist of ordered pairs of natural numbers. Thus, besides directions for carrying out the algorithmic process, the procedure must include (1) an indication of the set of possible initial data and (2) a rule by which the process is recognized as completed, when the desired result is attained. It is not assumed that the result must be achieved: the process of applying the algorithm to a specific possible initial datum (that is, an algorithmic process that develops from this datum onward) may terminate without a result or may not terminate at all. If the process terminates with a result (or fails to do so), the algorithm is said to be applicable (or, correspondingly, inapplicable) to the possible initial datum under consideration. It is possible to construct an algorithm II for which there exists no algorithm that discerns, on the basis of a possible initial datum for II, whether II is applicable to it or not; for instance, such an algorithm II can be constructed so that the set of positive integers serves as the set of its possible initial data.

The algorithm concept occupies a central position in modern mathematics, especially in computational mathematics. Thus, the problem of a numerical solution to equations of a given type reduces to finding an algorithm that will convert any pair, consisting of an arbitrary equation of the given type and an arbitrary rational number ε, into a number (or an n-tuple of numbers) that differs from the root (or roots) of the equation by less than ε. Improvements in computers offer the possibility of realizing increasingly complex algorithms with their use. However, the term “computational process” used in defining the algorithm concept must not be understood in the restricted meaning of digital calculations. Thus, even in school algebra courses one speaks of literal calculations, to say nothing of such nondigital symbols as brackets, equals signs, and the signs of arithmetical operations used in arithmetical calculations. It is possible to go further and consider calculations involving arbitrary symbols and their combinations; such precisely is the broad approach used to describe the algorithm concept. In this sense, one may speak of an algorithm for translation from one language to another, of an algorithm for train dispatching (which transforms information on train movements into orders), and of other such examples involving algorithmic descriptions of control processes. It is precisely for this reason that the algorithm is one of the central concepts of cybernetics. In general, the most varied constructive entities can serve as the initial data and results of algorithms. To take one example, the results of so-called recognition algorithms are the words “yes” and “no.”
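
For a single equation f(x) = 0 with a sign change on an interval, such an algorithm can be as simple as bisection; the Python sketch below (an editorial illustration, not part of the article) returns a number differing from a root by less than ε.

    def bisect_root(f, left, right, eps):
        """Return a number differing from a root of f by less than eps.

        Assumes f is continuous on [left, right] and f(left), f(right) have
        opposite signs, so the interval is known to contain a root.
        """
        while right - left >= eps:
            mid = (left + right) / 2.0
            if f(left) * f(mid) <= 0:
                right = mid     # the root lies in the left half
            else:
                left = mid      # the root lies in the right half
        return (left + right) / 2.0

    # Example: x^2 - 2 = 0 on [1, 2]; the root is sqrt(2) = 1.41421356...
    print(bisect_root(lambda x: x * x - 2, 1.0, 2.0, 1e-6))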

Example of an algorithm. Let the possible initial data and the possible results consist of all possible finite sequences (including the empty sequence) of the letters a and b—“words in the alphabet {a, b}.” We shall agree to call the transition from word X to word Y “permissible” in the following two cases (P will denote an arbitrary word): (1) X has the form aP and Y has the form Pb, and (2) X has the form baP and Y has the form Paba. An instruction is formulated: “starting with an arbitrary word, make permissible transitions until a word of the form aaP is obtained, then stop; the word P is the result.” This instruction forms an algorithm, which we denote as I. We take the word babaa as the initial datum and obtain after one transition baaaba and, after two, aabaaba. By virtue of the instruction we must stop; the result is baaba. We take the word baaba as the initial datum and obtain, successively, abaaba, baabab, abababa, bababab, babababa, .... It can be demonstrated that this process will never end—that is, there will never appear a word beginning with aa, and for every word obtained it will always be possible to perform a permissible transition. Let us now take the word abaab as the initial datum. We obtain baabb, abbaba, bbabab. At this point, however, no further permissible transition is possible, yet there is no signal to stop. This is what is called a resultless stop. Thus, I is applicable to the word babaa and inapplicable to the words baaba and abaab.
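
For readers who prefer code, the algorithm I can be programmed directly; the Python sketch below is an editorial addition that reproduces the three behaviours just described, with a step limit standing in for a process that never terminates.

    def run_algorithm_I(word, max_steps=20):
        """Apply the permissible transitions of algorithm I to a word over {a, b}.

        Returns ('result', P) if a word of the form aaP appears,
                ('resultless stop', word) if no permissible transition applies,
                ('no termination observed', word) if max_steps is exceeded.
        """
        for _ in range(max_steps):
            if word.startswith("aa"):          # stop: word has the form aaP
                return ("result", word[2:])
            if word.startswith("ba"):          # transition (2): baP -> Paba
                word = word[2:] + "aba"
            elif word.startswith("a"):         # transition (1): aP -> Pb
                word = word[1:] + "b"
            else:                              # no permissible transition
                return ("resultless stop", word)
        return ("no termination observed", word)

    print(run_algorithm_I("babaa"))  # ('result', 'baaba')
    print(run_algorithm_I("baaba"))  # ('no termination observed', ...)
    print(run_algorithm_I("abaab"))  # ('resultless stop', 'bbabab')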

Significance of algorithms. Algorithms abound in science; the ability to solve a problem “in the general form” always means, essentially, a knowledge of some algorithm. In speaking, for example, of a person’s ability to add numbers, one has in mind not the fact that he can sooner or later find the sum of any two numbers, but rather the fact that he possesses a unified method of addition applicable to any two specific notations of numbers—in other words, an addition algorithm (the familiar rule for addition of numbers by column is such an algorithm). The notion of a problem “in the general form” is explicated with the help of the concept of a mass problem. A mass problem is specified by a series of separate, individual problems and consists in the requirement to find a general method—that is, an algorithm—for their solution. Thus, the problem of the numerical solution of equations of a given type and the problem of automatic translation are mass problems: the individual problems constituting them are, in the first case, problems of the numerical solutions of individual equations of a given type and, in the second, problems of the translation of individual phrases. The role of mass problems determines both the significance and the sphere of application of the algorithm concept. Mass problems are extremely characteristic of and important in mathematics: for example, in algebra mass problems arise in the verification of algebraic equations of various types; in mathematical logic, we find mass problems for recognizing the derivability of propositions from given axioms; and so on. In the case of mathematical logic, the concept of the algorithm is all the more essential because it is the basis of the concept of a calculus—a central concept of mathematical logic. This concept serves as a generalization and explication of the intuitive concepts of “derivation” and “proof.” Establishing the unsolvability of some mass problem (say, the problem of recognizing the truth or demonstrability of sentences of some logicomathematical language)—that is, the absence of a unified algorithm that permits finding solutions to all individual problems of a given set—is an important cognitive act which shows that in the solution of concrete individual problems it is fundamentally necessary to have specific methods for each such problem. The existence of unsolvable mass problems is thus a sign of the inexhaustibility of the cognitive process.

Substantive phenomena which underlay the formation of the algorithm concept long occupied an important position in science. From very ancient times, many problems of mathematics consisted in the search for constructive methods of one kind or another. This search, especially intensified with the advent of convenient symbolism and with the realization that certain sought-for methods cannot, in principle, be found (the problem of squaring the circle and the like), was a powerful factor in the development of scientific knowledge. Realization of the impossibility of solving problems by direct calculation led to the creation, in the 19th century, of the set-theoretic concept. Only after a period of turbulent development of this concept, during which the question of constructive methods in the modern sense of the term did not arise at all, did it become possible, in the middle of the 20th century, to turn once again to questions of constructivity, this time at a new level, enriched by the emergent concept of the algorithm. It was this concept which formed the basis of a special constructive trend in mathematics.

The very word algorithm derives from algorithmi, a Latin transliteration of the Arabic name of al-Khwarizmi, a ninth-century mathematician from the district of Khorezm. In medieval Europe, the term algorism (algorithm) was used for the decimal positional number system and the art of calculating in it, since it was through a 12th-century Latin translation of al-Khwarizmi’s treatise that Europeans first became acquainted with the positional system of notation.

Structure of the algorithmic process. The algorithmic process is one of sequential transformation of constructive entities; it proceeds in discrete steps, each one of which consists in the replacement of a given constructive entity with another. Thus, in applying the algorithm I to the word baaba, we get a succession of words baaba, abaaba, baabab, and so forth. If, say, we apply the algorithm of subtraction by column to the pair <307, 49>, the following succession of constructive entities appears:

[Illustration in the original: the successive partially completed worksheets of the column subtraction of 49 from 307, ending with the difference 258.]
In this series of sequential constructive objects, each succeeding constructive object is fully determined, within the limits of the given algorithm, by its immediate predecessor. In a stricter approach, it is also assumed that the transition from every constructive object to the one immediately following it is sufficiently “elementary” in the sense that the transformation, in one step, of the preceding constructive object into the following one is of local character. The transformation does not embrace the whole constructive object, but only a portion of it delineated beforehand for the given algorithm, and the transformation itself is determined not by the whole preceding constructive object, but only by this limited portion.

Thus, along with sets of possible initial data and possible results, there exists for any algorithm a set of intermediate results which make up the working medium in which the algorithmic process develops. For I, all three sets coincide, but not for the subtraction-by-column algorithm: the possible initial data are pairs of numbers, the possible results are numbers (all in the decimal system), while intermediate results are complex fractions of the type

      ρ
    − q
    -----
      r

where q is the notation of a number in the decimal system, r is a similar notation or the empty word, and ρ is the notation of a number in the decimal system with an allowance for dots over certain digits.

The functioning of the algorithm begins with a preparatory step in which the possible initial datum is transformed into the initial member of the sequence of intermediate results; this transformation takes place on the basis of a special “rule of beginning” which forms part of the algorithm under consideration. This rule, for I, consists in the application of an identity transformation and, for the subtraction algorithm, in the replacement of the pair <a, b> with the expression

      a
    − b

(the minuend written above the subtrahend).
Then the “rule of direct processing” is applied, which effects the successive transformation of each arising intermediate result into its successor. These transformations continue until a certain test, to which each intermediate result is subjected as it appears, indicates that a given intermediate result is conclusive; this test is applied on the basis of a special “rule of completion.” For example, for I, the rule of completion consists in verifying whether the intermediate result begins with aa. If the rule of completion does not produce the stop signal for any intermediate result, then the rule of direct processing is either applicable to every arising intermediate result and the algorithmic process continues indefinitely, or it is inapplicable to a certain intermediate result and the process is terminated without result. Finally, the final result is extracted from the conclusive intermediate result, also on the basis of a special rule; for I, this extraction consists in discarding the first two a’s and, for the subtraction algorithm, in discarding everything except the bottom line of digits. In many important cases, the rule of beginning and the rule of extraction of result both assign identity transformations and therefore are not formulated separately. Thus, for every algorithm it is possible to isolate seven (not independent!) parameters that characterize it: (1) the set of possible initial data, (2) the set of possible results, (3) the set of intermediate results, (4) the rule of beginning, (5) the rule of direct processing, (6) the rule of completion, and (7) the rule of extraction of result.
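
Read as a programming recipe, this structure amounts to a small driver loop over four of the rules (beginning, direct processing, completion, extraction of result); the Python sketch below is an editorial illustration that encodes the algorithm I in this form.

    def run(datum, begin, process, is_conclusive, extract, max_steps=1000):
        """Drive a generic algorithmic process by its four rules.

        Returns ('result', value), ('resultless stop', last intermediate result),
        or ('no termination observed', last intermediate result).
        """
        intermediate = begin(datum)               # rule of beginning
        for _ in range(max_steps):
            if is_conclusive(intermediate):       # rule of completion
                return ("result", extract(intermediate))
            successor = process(intermediate)     # rule of direct processing
            if successor is None:                 # rule inapplicable: resultless stop
                return ("resultless stop", intermediate)
            intermediate = successor
        return ("no termination observed", intermediate)

    # The algorithm I expressed through these rules:
    def process_I(word):
        if word.startswith("ba"):
            return word[2:] + "aba"               # permissible transition (2)
        if word.startswith("a"):
            return word[1:] + "b"                 # permissible transition (1)
        return None                               # no permissible transition

    print(run("babaa", begin=lambda w: w, process=process_I,
              is_conclusive=lambda w: w.startswith("aa"),
              extract=lambda w: w[2:]))
    # ('result', 'baaba')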

“Refinement” of the concept of the algorithm. Further “refinements” of the concept of the algorithm are possible, and these, strictly speaking, lead to a certain narrowing of the concept. Every such refinement consists in a precise description of a certain class for each of the seven parameters mentioned above—a class within which the given parameter can change. The selection of these classes is what distinguishes one refinement from another. In many refinements, all classes except two—the class of sets of intermediate results and the class of rules of direct processing—are chosen individually; that is, all parameters, except the two exceptions mentioned, are rigidly fixed. Since the seven parameters determine a certain algorithm unambiguously, the choice of the seven classes of variation of these parameters determines a certain class of algorithms. However, such a choice is properly referred to as a “refinement” only if we are convinced that for an arbitrary algorithm having permissible (by the given choice) sets of possible initial data and possible results it is possible to designate an equivalent algorithm taken from the class of algorithms defined by the given choice. This conviction is formulated for each refinement as a basic hypothesis, which, at the present level of our ideas on the matter, cannot be the subject of mathematical proof.

The first refinements of the type described were proposed in 1936 by the American mathematician E. L. Post and the English mathematician A. M. Turing. Also well known are the refinements formulated by the Soviet mathematicians A. A. Markov and A. N. Kolmogorov. The latter proposed treating constructive entities as topological complexes of a specific type; this offered the possibility of explicating the property of “localness” of a transformation. For each of the proposed refinements, the corresponding main hypothesis is in good agreement with practice. Favoring this hypothesis is the fact that, as can be demonstrated, all proposed refinements are in a certain natural sense equivalent to one another.

As an example (in modernized form), we may take the refinement proposed by Turing. In order to specify a Turing algorithm, we must indicate (1) pairwise nonintersecting alphabets B, D, C, with a letter λ distinguished in D and letters α and ω in C, and (2) a set of pairs of the form <pξ, ηTq>, where p, q ∊ C, ξ, η ∊ B∪D, and T is one of the signs −, 0, +, it being assumed that in this set (called a program) there are no two pairs with identical first members. The parameters of the algorithm are assigned as follows: the possible initial data and the possible results are words in B; the possible intermediate results are words in B∪D∪C containing not more than one letter from C. The rule of beginning: the initial word P is translated into the word λαPλ. The rule of completion: an intermediate result is conclusive if it contains the letter ω. The rule of extraction of result: the result is decreed to be the sequence of all those letters of the conclusive intermediate result which follow ω and precede the first letter not contained in B. The rule of direct processing, which translates A into A′, consists in the following: we adjoin the letter λ to A on the right and on the left; in the word thus formed, we replace the portion of the form εpξ, where p ∊ C (and ε denotes the letter immediately preceding the letter from C), with the word Q determined by the following rule: in the program we seek the pair whose first member is pξ; let the second member of this pair be ηTq; if T is −, then Q = qεη; if T is 0, then Q = εqη; if T is +, then Q = εηq. The word appearing after this replacement is A′.
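
A rough Python rendering of this refinement (an editorial sketch based on the description above; the tiny program chosen here merely halts at once, so the result equals the initial word) may make the mechanics clearer.

    # Alphabets: B = input/result letters, D = auxiliary letters (here only the
    # blank λ), C = "state" letters, with α the beginning letter and ω the
    # completion letter.
    B = {"a", "b"}
    D = {"λ"}
    C = {"α", "ω"}

    # Program: pairs <pξ, ηTq>, stored here as {(p, ξ): (η, T, q)}.
    PROGRAM = {
        ("α", "a"): ("a", "0", "ω"),
        ("α", "b"): ("b", "0", "ω"),
        ("α", "λ"): ("λ", "0", "ω"),
    }

    def begin(word):                      # rule of beginning: P -> λαPλ
        return "λα" + word + "λ"

    def is_conclusive(result):            # rule of completion: contains ω
        return "ω" in result

    def extract(result):                  # rule of extraction of result
        out = []
        for letter in result[result.index("ω") + 1:]:
            if letter not in B:           # stop at the first letter outside B
                break
            out.append(letter)
        return "".join(out)

    def step(A):                          # rule of direct processing: A -> A'
        A = "λ" + A + "λ"                 # adjoin λ on the right and the left
        i = next(k for k, letter in enumerate(A) if letter in C)
        eps, p, xi = A[i - 1], A[i], A[i + 1]
        eta, T, q = PROGRAM[(p, xi)]
        Q = {"-": q + eps + eta, "0": eps + q + eta, "+": eps + eta + q}[T]
        return A[:i - 1] + Q + A[i + 2:]

    def run_turing(word, max_steps=100):
        result = begin(word)
        for _ in range(max_steps):
            if is_conclusive(result):
                return extract(result)
            result = step(result)
        return None

    print(run_turing("ab"))  # 'ab'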

V. A. USPENSKII

The Great Soviet Encyclopedia, 3rd Edition (1970-1979). © 2010 The Gale Group, Inc. All rights reserved.