An Intelligent Theory of Cost for Partial Metric Spaces (Matthews and Bukatin)

Added by Deon Garrett about 5 years ago

Partial metric spaces generalise metric spaces by allowing non-zero self-distance, which is needed to model computable partial information; yet they fall short in an important respect. The present cost of computing information, such as processor time or memory used, is rarely expressible in domain theory, while contemporary theories of algorithms incorporate precise control over the cost of computing resources. Complexity theory in Computer Science has advanced dramatically through an intelligent understanding of algorithms over discrete, totally defined data structures such as directed graphs, without using partially defined information. We thus have an unfortunate longstanding separation between partial metric spaces, used to model partially defined computable information, and the complexity theory of algorithms, used to cost totally defined computable information. To bridge that separation we seek an intelligent theory of cost for partial metric spaces. As examples we consider the cost of computing a double negation ¬¬p in two-valued propositional logic, the cost of computing negation as failure in logic programming, and a cost model for the hiaton time delay.
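
For context, here is a minimal sketch of what non-zero self-distance means, assuming Matthews' standard partial metric axioms and the well-known closed-interval example; the function name and sample values below are illustrative and not taken from the paper itself.

    # Sketch of a partial metric, assuming the standard axioms and the
    # closed-interval example; identifiers here are illustrative only.

    def p_interval(x, y):
        """Partial metric on non-empty closed intervals [a, b] of the reals:
        p([a, b], [c, d]) = max(b, d) - min(a, c).
        Self-distance p([a, b], [a, b]) = b - a, the interval's length,
        so only singletons (fully defined points) have zero self-distance.
        """
        (a, b), (c, d) = x, y
        return max(b, d) - min(a, c)

    # Partial metric axioms (for all x, y, z):
    #   P1: x = y  iff  p(x, x) = p(x, y) = p(y, y)
    #   P2: p(x, x) <= p(x, y)                        (small self-distances)
    #   P3: p(x, y) = p(y, x)                         (symmetry)
    #   P4: p(x, z) <= p(x, y) + p(y, z) - p(y, y)    (modified triangle inequality)

    if __name__ == "__main__":
        x, y = (0.0, 2.0), (1.0, 3.0)
        print(p_interval(x, x))                    # 2.0: non-zero self-distance, x is partial
        print(p_interval(x, y))                    # 3.0
        print(p_interval((1.0, 1.0), (1.0, 1.0)))  # 0.0: a totally defined point

The non-zero self-distance p(x, x) measures how far x is from being totally defined, which is exactly the partiality the abstract says domain-theoretic models capture but classical complexity theory does not.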

paper_48.pdf (267.1 kB)