#ComputationalComplexity
"Introduction to the Theory of Computation" by Michael Sipser is a foundational textbook that explores the mathematical underpinnings of computer science. By reading this book, you will gain a deep understanding of computation, its limits, and its capabilities. Below is a step-by-step breakdown of the outcomes you can expect from studying this book:
#TheoryOfComputation #ComputationTheory #ComputerScience #Algorithms #FormalLanguages #AutomataTheory #CSBooks #TechBooks #ComputationalTheory #TuringMachines #ComputationalComplexity #DataStructures #TechEducation #SoftwareEngineering #MathematicalLogic #TheoryOfComputationBooks #TechTutorial #ComputerScienceTheory #Programming #CSTheory #ComplexityTheory #MachineLearning #DiscreteMathematics #FormalMethods
A tweet:
A comparison between #deeplearning and brain function #computationalcomplexity #dendriticspikes #highperformancecomputing https://t.co/7ADrgtdP6P
— Despoina Magka (@MarlaMagka) September 7, 2018
I/O Complexity
Today I came across a very interesting post on I/O complexity, an additional way of characterizing the cost of a computation. The following excerpt, as a teaser, is from Robert Harper's Existential Type blog, where the full article appears; also relevant is the accompanying slideshow on Cache- and I/O-Efficient Functional Algorithms.
This summer Guy Blelloch and I began thinking about other characterizations of the complexity of programs besides the familiar abstractions of execution time and space requirements of a computation. One important measure, introduced by Jeff Vitter, is called I/O Complexity. It measures the efficiency of algorithms with respect to memory traffic, a very significant determiner of the performance of programs. The model is sufficiently abstract as to encompass several different interpretations of I/O complexity. Basically, the model assumes an unbounded main memory in which all data is ultimately stored, and considers a cache of M words, organized into blocks of size B, that provides quick access to main memory. The complexity of algorithms is analyzed in terms of these parameters, under the assumption that in-cache accesses are cost-free, so that the only significant costs are those incurred by loading and flushing the cache.

You may interpret the abstract concepts of main memory and cache in the standard way as a two-level hierarchy representing, say, on- and off-chip memory access, or instead as representing a disk (or other storage medium) loaded into memory for processing. The point is that the relative cost of processing cached versus uncached data is huge, and worth considering as a measure of the efficiency of an algorithm.
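To make the cost measure concrete, here is a small Python sketch of the model described in the excerpt: an unbounded main memory, a cache of M words organized into blocks of B words, free in-cache accesses, and a count of block loads only. The LRU eviction policy, the function name count_block_loads, and the parameter values are illustrative assumptions, not details from the quoted text.

```python
import random
from collections import OrderedDict

def count_block_loads(accesses, M, B):
    """Count block loads (cache misses) for a sequence of word addresses."""
    cache = OrderedDict()       # block id -> None, kept in LRU order
    capacity = M // B           # number of blocks the cache can hold
    loads = 0
    for addr in accesses:
        block = addr // B
        if block in cache:
            cache.move_to_end(block)       # in-cache access: cost-free
        else:
            loads += 1                     # miss: load the block
            cache[block] = None
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used block
    return loads

N, M, B = 10_000, 256, 16
random.seed(0)
print("sequential scan:", count_block_loads(range(N), M, B))
print("random accesses:", count_block_loads([random.randrange(N) for _ in range(N)], M, B))
```

Under this measure a sequential scan of N words costs about N/B block loads, while the same number of scattered accesses costs close to N, which is exactly the gap the I/O model is designed to expose.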