This link has been bookmarked by 78 people. It was first bookmarked on 04 Apr 2007, by ashishgup.
-
19 Feb 15
-
02 Feb 15
-
Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem.
-
best algorithm
-
Cobham's thesis says that a problem can be solved with a feasible amount of resources if it admits a polynomial time algorithm.
-
Many types of Turing machines are used to define complexity classes, such as deterministic Turing machines, probabilistic Turing machines, non-deterministic Turing machines, quantum Turing machines, symmetric Turing machines and alternating Turing machines. They are all equally powerful in principle, but when resources (such as time or space) are bounded, some of these may be more powerful than others.
-
Average-case complexity: This is the complexity of solving the problem on an average. This complexity is only defined with respect to a probability distribution over the inputs. For instance, if all inputs of the same size are assumed to be equally likely to appear, the average case complexity can be defined with respect to the uniform distribution over all inputs of size n.
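As an illustration of this definition (not from the source), here is a minimal Python sketch that computes the average-case cost of linear search under the uniform distribution, treating every ordering of the input as equally likely:

```python
from itertools import permutations

def search_steps(lst, target):
    """Count the steps a linear search performs on one instance."""
    steps = 0
    for x in lst:
        steps += 1
        if x == target:
            break
    return steps

def average_case(n):
    """Average steps of linear search for the value 0, with all n!
    orderings of 0..n-1 taken as equally likely (uniform distribution)."""
    runs = [search_steps(list(p), 0) for p in permutations(range(n))]
    return sum(runs) / len(runs)
```

Since 0 is equally likely to sit in any position, the average over all size-3 inputs is (1 + 2 + 3) / 3 = 2.0 steps.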
-
However, proving lower bounds is much more difficult, since lower bounds make a statement about all possible algorithms that solve a given problem. The phrase "all possible algorithms" includes not just the algorithms known today, but any algorithm that might be discovered in the future. To show a lower bound of T(n) for a problem requires showing that no algorithm can have time complexity lower than T(n).
-
The set of decision problems solvable by a deterministic Turing machine within time f(n)
-
-
31 Oct 14
-
One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do.
-
-
05 Oct 14
-
31 Mar 14
-
08 Dec 13
-
In computational complexity theory, a problem refers to the abstract question to be solved.
-
-
14 Oct 13
-
23 Sep 13
-
-
A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm.
-
Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input.
-
complexity theory addresses computational problems and not particular problem instances.
-
They are all equally powerful in principle, but when resources (such as time or space) are bounded, some of these may be more powerful than others.
-
However, some computational problems are easier to analyze in terms of more unusual resources. For example, a nondeterministic Turing machine is a computational model that is allowed to branch out to check many different possibilities at once. The nondeterministic Turing machine has very little to do with how we physically want to compute algorithms, but its branching exactly captures many of the mathematical models we want to analyze, so that nondeterministic time is a very important resource in analyzing computational problems.
-
The complexity of an algorithm is often expressed using big O notation.
-
A complexity class is a set of problems of related complexity.
-
-
24 Apr 13
-
P versus NP problem
-
The question of whether P equals NP is one of the most important open questions in theoretical computer science because of the wide implications of a solution
-
The P versus NP problem is one of the Millennium Prize Problems proposed by the Clay Mathematics Institute. There is a US$1,000,000 prize for resolving the problem.
-
-
11 Oct 12
-
26 Sep 12
-
13 Jun 12
-
a computational problem consists of problem instances and solutions to these problem instances
-
Decision problems are one of the central objects of study in computational complexity theory
-
A Turing machine is a mathematical model of a general computing machine.
-
A deterministic Turing machine is the most basic Turing machine, which uses a fixed set of rules to determine its future actions.
-
A probabilistic Turing machine is a deterministic Turing machine with an extra supply of random bits.
-
Algorithms that use random bits are called randomized algorithms
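To make the "extra supply of random bits" concrete, here is an illustrative sketch (not from the source) of a Monte Carlo randomized algorithm: a Fermat-style primality check that draws random witnesses:

```python
import random

def fermat_probably_prime(n, trials=20):
    """Monte Carlo primality check: each trial spends random bits to
    pick a witness a; a composite n is usually exposed by some a."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)   # the "supply of random bits"
        if pow(a, n - 1, n) != 1:        # Fermat's little theorem fails
            return False                 # definitely composite
    return True                          # probably prime
```

The answer "False" is always correct; "True" is correct with high probability, which is exactly the trade-off a probabilistic Turing machine is allowed to make.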
-
The time required by a deterministic Turing machine M on input x is the total number of state transitions, or steps, the machine makes before it halts and outputs the answer.
-
Upper and lower bounds on the complexity of problems
-
To show an upper bound T(n) on the time complexity of a problem, one needs to show only that there is a particular algorithm with running time at most T(n).
-
analysis of algorithms
-
To show a lower bound of T(n) for a problem requires showing that no algorithm can have time complexity lower than T(n).
-
big O notation
-
complexity classes can be defined based on function
-
"polynomial time", "logarithmic space", "constant depth", etc.
-
Many important complexity classes can be defined by bounding the time or space used by the algorithm.
-
important complexity classes of decision problems defined in this manner are the following:
-
The complexity class P is often seen as a mathematical abstraction modeling those computational tasks that admit an efficient algorithm.
-
efficient algorithm
-
-
28 Mar 12
-
23 Mar 12
-
07 Dec 11
-
06 Oct 11
-
Thus no problem in C is harder than X, since an algorithm for X allows us to solve any problem in C.
-
A problem X is hard for a class of problems C if every problem in C can be reduced to X.
-
If a problem X is in C and hard for C, then X is said to be complete for C. This means that X is the hardest problem in C. (Since many problems could be equally hard, one might say that X is one of the hardest problems in C.)
-
Thus the class of NP-complete problems contains the most difficult problems in NP, in the sense that they are the ones most likely not to be in P
-
Because the problem P = NP is not solved, being able to reduce a known NP-complete problem, Π2, to another problem, Π1, would indicate that there is no known polynomial-time solution for Π1. This is because a polynomial-time solution to Π1 would yield a polynomial-time solution to Π2. Similarly, because all NP problems can be reduced to the set, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP
-
-
18 Aug 11
-
One way to view non-determinism is that the Turing machine branches into many possible computational paths at each step, and if it solves the problem in any of these branches, it is said to have solved the problem.
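This branching view can be simulated on an ordinary computer (at exponential cost) by recursing over every choice. Here is an illustrative sketch (not from the source) for subset sum over positive integers: at each step the "machine" branches on including or skipping a number, and accepts if any branch accepts:

```python
def nondet_subset_sum(nums, target):
    """Simulate a nondeterministic machine on subset sum: branch on
    'include nums[i]' vs 'skip nums[i]'; accept iff ANY path accepts.
    Assumes all numbers are positive."""
    def branch(i, remaining):
        if remaining == 0:
            return True                  # this computation path accepts
        if i == len(nums) or remaining < 0:
            return False                 # this path rejects
        # nondeterministic choice: include the number, or skip it
        return branch(i + 1, remaining - nums[i]) or branch(i + 1, remaining)
    return branch(0, target)
```

The nondeterministic machine explores all branches "at once"; the deterministic simulation above pays for them one by one, which is why nondeterministic time is such a different resource.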
-
The time required by a deterministic Turing machine M on input x is the total number of state transitions, or steps, the machine makes before it halts and outputs the answer ("yes" or "no").
-
-
04 Jun 11
-
03 Jun 11
-
10 Dec 10
-
Computational complexity theory is a branch of the theory of computation in theoretical computer science and mathematics that focuses on classifying computational problems according to their inherent difficulty.
-
-
15 Nov 10
-
18 Oct 10 ronald fuller
Computational complexity theory is a branch of the theory of computation in computer science and mathematics that focuses on classifying computational problems according to their inherent difficulty. In this context, a computational problem is understood to be a task that is in principle amenable to being solved by a computer. Informally, a computational problem consists of problem instances and solutions to these problem instances. For example, primality testing is the problem of determining whether a given number is prime or not. The instances of this problem are natural numbers, and the solution to an instance is yes or no based on whether the number is prime or not.
-
13 Oct 10
-
Computational complexity theory is a branch of the theory of computation in computer science and mathematics that focuses on classifying computational problems according to their inherent difficulty.
-
Informally, a computational problem consists of problem instances and solutions to these problem instances. For example, primality testing is the problem of determining whether a given number is prime or not. The instances of this problem are natural numbers, and the solution to an instance is yes or no based on whether the number is prime or not.
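A trial-division sketch (illustrative, not from the source) makes the instance/solution pairing concrete: each natural number is an instance, and the yes/no answer is its solution:

```python
def is_prime(n):
    """Primality testing by trial division: the instance is a natural
    number n, the solution is yes (True) or no (False)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True
```

For example, the instance 7 has solution yes, while the instance 9 has solution no.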
-
A problem is regarded as inherently difficult if solving the problem requires a large amount of resources, whatever the algorithm used for solving it. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage.
-
Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). In particular, computational complexity theory determines the practical limits on what computers can and cannot do.
-
Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between computational complexity theory and analysis of algorithms is that the latter is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the former asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, it tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kind of problems can be solved in principle algorithmically.
-
A computational problem can be viewed as an infinite collection of instances together with a solution for every instance.
-
For example, consider the problem of primality testing. The instance is a number and the solution is "yes" if the number is prime and "no" otherwise. Alternately, the instance is a particular input to the problem, and the solution is the output corresponding to the given input.
-
When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet (i.e., the set {0,1}), and thus the strings are bitstrings. As in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices, or via encoding their adjacency lists in binary.
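As a small illustration (not from the source) of such an encoding, this sketch turns an undirected graph into the row-major bitstring of its adjacency matrix:

```python
def encode_graph(n, edges):
    """Encode an undirected graph on vertices 0..n-1 as the row-major
    bitstring of its n-by-n adjacency matrix."""
    matrix = [[0] * n for _ in range(n)]
    for u, v in edges:
        matrix[u][v] = matrix[v][u] = 1
    return "".join(str(bit) for row in matrix for bit in row)
```

A triangle on three vertices encodes to "011101110"; any two reasonable encodings (adjacency matrix, adjacency lists) can be converted into each other efficiently, which is what lets the theory stay encoding-independent.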
-
Even though some proofs of complexity-theoretic theorems regularly assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding. This can be achieved by ensuring that different representations can be transformed into each other efficiently.
-
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a special type of computational problem whose answer is either yes or no, or alternately either 1 or 0
-
A decision problem can be viewed as a formal language, where the members of the language are instances whose answer is yes, and the non-members are those instances whose output is no.
-
The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string, otherwise it is said to reject the input.
-
An example of a decision problem is the following. The input is an arbitrary graph. The problem consists in deciding whether the given graph is connected, or not. The formal language associated with this decision problem is then the set of all connected graphs—of course, to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings.
-
To measure the difficulty of solving a computational problem, one may wish to see how much time the best algorithm requires to solve the problem. However, the running time may, in general, depend on the instance. In particular, larger instances will require more time to solve. Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the instance. This is usually taken to be the size of the input in bits.
-
Complexity theory is interested in how algorithms scale with an increase in the input size. For instance, in the problem of finding whether a graph is connected, how much more time does it take to solve a problem for a graph with 2n vertices compared to the time taken for a graph with n vertices?
-
Since the time taken on different inputs of the same size can be different, the worst-case time complexity T(n) is defined to be the maximum time taken over all inputs of size n. If T(n) is a polynomial in n, then the algorithm is said to be a polynomial time algorithm. Cobham's thesis says that a problem can be solved with a feasible amount of resources if it admits a polynomial time algorithm.
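The "maximum over all inputs of size n" in this definition can be computed directly for a toy algorithm (an illustrative sketch, not from the source), here linear search over every ordering of n distinct values:

```python
from itertools import permutations

def steps(lst, target):
    """Steps a linear search takes on one particular input."""
    count = 0
    for x in lst:
        count += 1
        if x == target:
            break
    return count

def worst_case_T(n):
    """T(n): the maximum number of steps over all inputs of size n
    (here, every ordering of 0..n-1, searching for the value 0)."""
    return max(steps(list(p), 0) for p in permutations(range(n)))
```

The maximum is attained when 0 sits last, so T(n) = n, a polynomial in n; by Cobham's thesis this marks the search as feasible.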
-
It is believed that if a problem can be solved by an algorithm, there exists a Turing machine that solves the problem. Indeed, this is the statement of the Church–Turing thesis.
-
Furthermore, it is known that everything that can be computed on other models of computation known to us today, such as a RAM machine, Conway's Game of Life, cellular automata or any programming language can be computed on a Turing machine. Since Turing machines are easy to analyze mathematically, and are believed to be as powerful as any other model of computation, the Turing machine is the most commonly used model in complexity theory.
-
Many types of Turing machines are used to define complexity classes, such as deterministic Turing machines, probabilistic Turing machines, non-deterministic Turing machines, quantum Turing machines, symmetric Turing machines and alternating Turing machines. They are all equally powerful in principle, but when resources (such as time or space) are bounded, some of these may be more powerful than others.
-
A decision problem A can be solved in time f(n) if there exists a Turing machine operating in time f(n) that solves the problem. Since complexity theory is interested in classifying problems based on their difficulty, one defines sets of problems based on some criteria. For instance, the set of problems solvable within time f(n) on a deterministic Turing machine is then denoted by DTIME(f(n)).
-
A reduction is a transformation of one problem into another problem. It captures the informal notion of a problem being at least as difficult as another problem. For instance, if a problem X can be solved using an algorithm for Y, X is no more difficult than Y, and we say that X reduces to Y. There are many different types of reductions, based on the method of reduction, such as Cook reductions, Karp reductions and Levin reductions, and the bound on the complexity of reductions, such as polynomial-time reductions or log-space reductions.
-
The most commonly used reduction is a polynomial-time reduction. This means that the reduction process takes polynomial time. For example, the problem of squaring an integer can be reduced to the problem of multiplying two integers. This means an algorithm for multiplying two integers can be used to square an integer. Indeed, this can be done by giving the same input to both inputs of the multiplication algorithm. Thus we see that squaring is not more difficult than multiplication, since squaring can be reduced to multiplication.
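This reduction is short enough to write out directly (an illustrative sketch, not from the source): the squaring algorithm just calls the assumed multiplication algorithm with the same value on both inputs:

```python
def multiply(a, b):
    """Stands in for an assumed algorithm that multiplies two integers."""
    return a * b

def square(x):
    """Reduction from squaring to multiplication: give the same input
    to both arguments, so squaring is no harder than multiplying."""
    return multiply(x, x)
```

Any improvement to `multiply` immediately improves `square`, which is exactly what "squaring reduces to multiplication" asserts.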
-
A problem X is hard for a class of problems C if every problem in C can be reduced to X. Thus no problem in C is harder than X, since an algorithm for X allows us to solve any problem in C. Of course, the notion of hard problems depends on the type of reduction being used. For complexity classes larger than P, polynomial-time reductions are commonly used. In particular, the set of problems that are hard for NP is the set of NP-hard problems.
-
-
26 Sep 10
-
09 Aug 10
-
22 Jul 10
-
30 Jun 10
-
29 May 10 John Rodrigues
"Computational complexity theory is a branch of the theory of computation in computer science and mathematics that focuses on classifying computational problems according to their inherent difficulty. In this context, a computational problem is understood
-
21 May 09
-
05 May 09
-
09 Nov 08 Carlos Pereira
Computational complexity theory, as a branch of the theory of computation in computer science, investigates the problems related to the amounts of resources required for the execution of algorithms (e.g., execution time), and the inherent difficulty in pr
-
18 Jun 08
-
20 Mar 08
-
12 Jan 08
-
16 May 07
-
14 Aug 06