%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/90-12.ps.Z %A Rapaport, W. %T Cognitive Science (Superseded by TR 96-19) %R 90-12 %D May 1990 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/90-24.ps.Z %A He, X. %T On Finding the Rectangular Duals of Planar Triangulated Graphs %R 90-24 %D September 1990 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/91-04.ps.Z %A Homer, S. %A Selman, A. %T Complexity Theory %R 91-04 %D June 8, 1992 %I Department of Computer Science, SUNY Buffalo %X This is an introductory survey of complexity theory. Topics covered include Modes of Computation, Complexity Classes and Complexity Measures, Basic Results, Nondeterminism and NP-Completeness, Relative Computability, Parallelism, Probabilistic Complexity Classes, and Structural Complexity Theory. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/91-05.ps.Z %A He, X. %T Efficient Parallel Algorithms for Two Graph Layout Problems %R 91-05 %D June 1991 %I Department of Computer Science, SUNY Buffalo %X We present efficient parallel algorithms for solving two graph layout problems: finding a F\'{a}ry embedding on a grid and constructing a rectangular dual for planar graphs. The algorithm for the first problem takes $O(\log n \log^* n)$ time with $O(n)$ processors on a PRAM. The algorithm for the second problem takes $O(\log^2 n)$ time with $O(n)$ processors. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/91-09.ps.Z %A Haas, J. %A Jayaraman, B.
%T Automatic Synthesis of Semantics for Context-free Grammars %R 91-09 %D July 1991 %I Department of Computer Science, SUNY Buffalo %X We are investigating the mechanical transformation of an unambiguous context-free grammar (CFG) into a definite-clause grammar (DCG) using a finite set of examples, each of which is a pair $\langle s,~m\rangle$, where $s$ is a sentence belonging to the language defined by the CFG and $m$ is a semantic representation (meaning) of $s$. The resulting DCG can be executed (by the interpreter of a logic programming language) to compute the semantics of every sentence of the original CFG. Three important assumptions underlie our approach: (i) the semantic representation language is the {\it simply typed $\lambda$-calculus}; (ii) the semantic representation of a sentence can be obtained from the semantic representations of its parts ({\it compositionality}); and (iii) the structure of the semantic representation determines its meaning ({\it intensionality}). The basic technique involves an enumeration of parse trees for sentences of increasing size; for each parse tree, a set of equations over (typed) function variables that represent the meanings of the constituent subtrees is formulated and solved by means of a higher-order unification procedure. The solutions for these function variables serve to augment the original grammar in order to derive the final DCG. A technique called {\it partial execution} is used to convert, where possible, the generated higher-order DCG into a first-order DCG, to facilitate efficient bidirectional execution. In the appendix, we provide a detailed illustration of the use of such a system for storing and retrieving information contained in natural language sentences. Based on our experimentation, we conclude that an improved version of this system will facilitate rapid prototyping of natural language front-ends for various applications.
%Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/91-11.ps.Z %A Jayaraman, B. %T The SuRE Programming Framework %R 91-11 %D August 1991 %I Department of Computer Science, SUNY Buffalo %X The goal of this work is the design of logic programming constructs that will help minimize the dependence on extralogical features. Towards this end, we explore in this paper three forms of logical assertions---equational, relational, and subset assertions---accompanied by corresponding matching and unification operations. The uses of equations and relations are well-known, and hence we concentrate on the uses of the subset assertion. In an earlier paper, we introduced three forms of this assertion: $f(terms) \supseteq expr$, $f(terms) \supseteq set$, and $f(terms) \supseteq set$ {\tt :-} {\it condition}. In this paper, we show how subset assertions can be used to re-formulate a relation as a set-valued function, thereby declaratively specifying which arguments of the relation are the input arguments and thus obviating {\it mode} declarations. We also present two new ways of invoking a function defined by subset assertions: using an element goal, $f(terms) \ni x$, to incrementally generate elements; and using a superset goal, $f(terms) \supseteq s$, to lazily generate the entire set. We show how lazy generation of the set allows the search space to be pruned declaratively, obviating some uses of the {\it cut}, while eager generation of the set corresponds to the {\it setof} construct of Prolog. Subset assertions can also help efficiently formulate transitive-closure operations, which are traditionally expressed using {\it assert} and {\it retract}. In this paper, we also consider the respective duals of the foregoing three forms of subset assertions: $f(terms) \subseteq expr$, $f(terms) \subseteq set$, and $f(terms) \subseteq set$ {\tt :-} {\it condition}. 
We show that these new logical forms also have very natural uses, especially in flow analysis problems. The programming framework embodying all these features is called {\bf SuRE}---for {\bf Su}bsets, {\bf R}elations, and {\bf E}quations---and is being implemented with a view to answering the question: can logic programming be declarative and practical? %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/91-12.ps.Z %A Selman, A. %T A Taxonomy of Complexity Classes of Functions %R 91-12 %D June 1992 %I Department of Computer Science, SUNY Buffalo %X This paper comprises a systematic comparison of several complexity classes of functions that are computed nondeterministically in polynomial time or with an oracle in NP. There are three components to this work. - A taxonomy is presented that demonstrates all known inclusion relations of these classes. For (nearly) each inclusion that is not shown to hold, evidence is presented to indicate that the inclusion is false. As an example, consider FewPF, the class of multivalued functions that are nondeterministically computable in polynomial time such that for each {\em x}, there is a polynomial bound on the number of distinct output values of $f(x)$. We show that ${\rm FewPF} \subseteq {\rm PF}^{{\rm NP}}_{tt}$. However, we show ${\rm PF}^{{\rm NP}}_{tt} \subseteq {\rm FewPF}$ if and only if NP = co-NP, and thus ${\rm PF}^{{\rm NP}}_{tt} \subseteq {\rm FewPF}$ is likely to be false. - Whereas it is known that ${\rm P}^{{\rm NP}}(O(\log n)) = {\rm P}^{{\rm NP}}_{tt} \subseteq {\rm P}^{{\rm NP}}$ [Hem87, Wagb, BH88], we show that ${\rm PF}^{{\rm NP}}(O(\log n)) = {\rm PF}^{{\rm NP}}_{tt}$ implies P = FewP and R = NP. Also, we show that ${\rm PF}^{{\rm NP}}_{tt} = {\rm PF}^{{\rm NP}}$ if and only if ${\rm P}^{{\rm NP}}_{tt} = {\rm P}^{{\rm NP}}$.
- We show that if every nondeterministic polynomial-time multivalued function has a single-valued nondeterministic refinement (equivalently, if every honest function that is computable in polynomial time can be inverted by a single-valued nondeterministic function), then there exists a disjoint pair of NP-complete sets such that every separator is NP-hard. The latter is a previously studied open problem that is closely related to investigations on promise problems. This result motivates a study of reductions between partial multivalued functions. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/91-13.ps.Z %A Shapiro, S. %A Chalupsky, H. %A Chou, H. %T Connecting ARC/INFO and SNACTor Project Report %R 91-13 %D June 1992 %I Department of Computer Science, SUNY Buffalo %X This report describes an interface between ARC/INFO, a geographic information management system, and SNACTor, the SNePS acting component. It also shows a small example interaction that demonstrates how ARC/INFO and SNACTor can interact using the new interface, and how more sophisticated future applications can make use of its functionality. The interface was designed and implemented during a two-month research project carried out in Summer 1990 at the State University of New York at Buffalo. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/91-14.ps.Z %A Shende, A. %T Digital Analog Simulation of Uniform Motion in Representations of Physical N-Space by Lattice-work MIMD Computer Architectures %R 91-14 %D April 1991 %I Department of Computer Science, SUNY Buffalo %X This doctoral dissertation is part of an ongoing research project involving John Case, Dayanand S. Rajan, and myself.
We are investigating the possibility of solving problems in scientific computing involving the motion of objects in a bounded region of physical {\em n}-space by (a) representing points in the region of space by processors in a lattice-work mesh of processors with local connections for inter-processor communication, and (b) {\em literally, analogically} simulating the motion of objects by representing the particles of these objects by algorithms which move themselves about in the lattice-work of processors, much as the motion in real space of the particles making up real objects, in effect, constitutes the motion of those objects. The main contributions of this dissertation are (i) two provably correct algorithms to generate {\em virtually perfectly shaped} spherical wavefronts emanating from a point source at virtually constant radial speed, (ii) a provably correct algorithm template for simulating the uniform-speed traversal of certain curves by a single particle, and (iii) for the two algorithms of (i) and the template of (ii), the specification of classes of meshes which are excellent discrete approximations to bounded regions of Euclidean {\em n}-space, and on which these algorithms can be executed. Algorithms for the simulation of uniform-speed translation and uniform-angular-speed revolution of single particles are presented as specific instances of the algorithm template. A {\em crucial} aspect of {\em all} the algorithms in this dissertation is that, in spite of the discreteness of the representation of physical {\em n}-space, the {\em speed} of the simulations, using these algorithms, is approximately {\em linearly proportional} to the speed of the phenomenon being simulated. Furthermore, all decisions made by processors running these algorithms are based only on {\em local} information, such as messages from neighboring processors. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/91-16.ps.Z %A He, X.
%T Parallel Algorithm for Cograph Recognition with Applications (Revised) %R 91-16 %D June 1991 %I Department of Computer Science, SUNY Buffalo %X We present a parallel algorithm for recognizing cographs and constructing their cotrees. The algorithm takes $O(\log^2 n)$ time with $O(n+m)$ processors on a CRCW PRAM, where $n$ and $m$ are the number of vertices and edges of the graph. Using the cotree representation, we obtain parallel algorithms for solving the maximum matching and the permutation representation problems for cographs using $O(\log n)$ time with $O(n)$ processors. We also obtain a parallel algorithm for the depth-first spanning tree problem for permutation graphs (a class that properly contains cographs) which takes $O(\log^2 n)$ time with $O(n)$ processors. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/91-18.ps.Z %A Regan, K. %A Schwentick, T. %T On the Power of One Bit of a Number-P Function %R 91-18 %D June 5, 1992 %I Department of Computer Science, SUNY Buffalo %X We introduce the class $MP$ of languages $L$ which can be solved in polynomial time with an oracle for one selected bit of the value $f(y)$ of a $\# P$ function $f$ on a selected argument $y$. This extends the much-studied language classes $\oplus P$ and $PP$, which correspond to the power of the least and most significant bits, respectively. We show that $MP$ is captured by the power of the middle bit; namely: a language $L$ is in $MP$ iff for some $\# P$ function $f'$ and all $x$, $x \in L \iff$ the middle bit of $f'(x)$ in binary notation is a `1'. Also, S.~Toda's proof [Toda89,91] that the polynomial hierarchy ($PH$) is contained in $P^{\# P}$ actually gives: $PH \subseteq BP[\oplus P] \subseteq C \oplus P \subseteq MP$. The class $MP$ has complete problems, and is closed under complements and under polynomial-time many-one reducibility.
We show that $MP$ is closed under intersection iff, for any fixed $k > 0$, $k$ bits of a $\# P$ function are no more powerful than one. Moreover, if there is a polynomial-time construction for the closure under intersection, then $O(\log n)$-many bits have the same power as one. We examine the subclass $AmpMP$ of languages whose $MP$ representations can be ``amplified,'' showing that $BP[\oplus P] \subseteq AmpMP$, and that for any subclass $\cal C$ of $AmpMP$ which is closed under polynomial-time many-one reducibility, $MP^{\cal C} = MP$. Hence $BP[\oplus P]$ is low for $MP$, and if $C_{=}P \subseteq AmpMP$, then $PP^{PP} = MP$. Finally, our work leads to a purely mathematical question about the size of integer-valued polynomials $p(x,y)$ which satisfy certain congruence relations, one which also matters to the theory of bounded-depth circuits. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-03.ps.Z %A Revankar, S. %A Sher, D. %T Supervised Image Segmentation %R 92-03 %D January 1992 %I Department of Computer Science, SUNY Buffalo %X Image segmentation is the process of dividing an image into meaningful regions. Human supervision of computer-generated segmentation is essential for tasks such as medical image analysis and surveillance-radar image analysis. The decisions made on the basis of the segmented images are often critical in such domains, and neither human imprecision nor the unreliability of automatic algorithms is tolerable. We develop a collaborative scheme that facilitates active human supervision of computer segmentation of images. Through this, we complement the reliability of a human expert with the precision of segmentation and deformable modeling algorithms that track a stationary or moving nonrigid object in a sequence of snap-shots of the scene.
In our system, an expert user observes the computer-generated segmentation along with the original images in a user-friendly graphics environment, and interactively indicates the wrongly classified regions either by {\it pointing} or by {\it circling}. The precise boundaries of indicated regions are computed by studying original image properties in that region, and the human visual {\it attention distribution map} obtained from published psychological and psychophysical research. All refinement operations are defined in the form of linear functions so that facilities such as error recovery and commutativity of operations are inherent to the system. We use the developed theory to extract heart walls from a sequence of two-dimensional echocardiograms. Echocardiography is one of the most popular and safest methods of observing the heart. Segmentation of the heart wall in echocardiograms is essential to detect and quantify a large spectrum of heart abnormalities. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-09.ps.Z %A Chuang, E. %A Sher, D. %T Chi-square Tests for Feature Detection %R 92-09 %D May 1992 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-10.ps.Z %A Chuang, E. %A Sher, D. %T Evidence Representation & Combination in Low-level Vision %R 92-10 %D May 1992 %I Department of Computer Science, SUNY Buffalo %X We discuss using statistics from $\chi^{2}$ tests and a summing rule$^{(1)}$ for evidence representation and combination in low-level vision. Feature likelihoods are represented by statistics derived from $\chi^{2}$ tests, and the relationship between these likelihoods and noise level is linear. Multiple statistics are combined by the summing rule into statistics for larger features such as segments and polygons.
This is justified because the sum of a set of independent $\chi^{2}$ random variables is also a $\chi^{2}$ random variable, and the geometric meaning of the sum is equal to the integration of its addends. Examples show that the performance of the Hough transform for detecting line segments is significantly improved if statistics are included. Therefore, statistics adopted as evidence can not only capture uncertainty, but also help to avoid error in the Hough transform. The summing rule can combine statistics from $\chi^2$ tests, and can also integrate the meanings of the addends into the sum. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-12.ps.Z %A Ho, Tin Kam %T A Theory of Multiple Classifier Systems and Its Application to Visual Word Recognition %R 92-12 %D May 1992 %I Department of Computer Science, SUNY Buffalo %X Despite the success of many pattern recognition systems in constrained domains, problems that involve noisy input and many classes remain difficult. A promising direction is to use several classifiers simultaneously, such that they can complement each other in correctness. This thesis is concerned with decision combination in a multiple classifier system, which is critical to its success. A multiple classifier system consists of a set of classifiers and a decision combination function. It is a preferred solution to a complex recognition problem because it allows simultaneous use of feature descriptors of many types, corresponding measures of similarity, and many classification procedures. It also allows dynamic selection, so that classifiers adapted to inputs of a particular type may be applied only when those inputs are encountered. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-13.ps.Z %A Revankar, S. %A Sher, D.
%T Pattern Extraction by Adaptive Propagation of a Regional Threshold %R 92-13 %D June 1992 %I Department of Computer Science, SUNY Buffalo %X We describe a method for pattern extraction by adaptive propagation of a regional threshold to the rest of the image. In most images there is an easily thresholded region. We propagate the regional threshold to the entire image using a linear approximation of the image intensity gradients. This method is useful in fields where the extraction of the precise geometry of the binarized patterns is required, and in fields where continuous thin lines are to be extracted. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-14.ps.Z %A Khoubyari, S. %T The Application of Word Image Matching in Text Recognition %R 92-14 %D June 1992 %I Department of Computer Science, SUNY Buffalo %X Printed text recognition usually involves recognizing individual characters, and assembling the results to recognize words and sentences. However, the performance of conventional character recognition systems tends to suffer in the presence of moderate levels of degradation in the text. A method is proposed that uses equivalence among frequent word images to derive hypotheses for each word using the available language statistics. Word images are clustered to determine equivalency. The attributes of the clusters, as well as the relationships among them, are matched with the same characteristics for the words in the language. The method requires no explicit training, and is fairly tolerant of image degradation. The results for several sample sizes are reported. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-18.ps.Z %A Sunder, S.
%A He, X. %T An NC Algorithm for Finding Minimum Weighted Completion Time Schedule on Series Parallel Graphs %R 92-18 %D July 1992 %I Department of Computer Science, SUNY Buffalo %X We present a parallel algorithm for solving the minimum weighted completion time scheduling problem for transitive series parallel graphs. The algorithm takes $O(\log^2 n)$ time with $O(n^3)$ processors on a CREW PRAM, where $n$ is the number of vertices of the input graph. This is the first NC algorithm for solving the problem. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-21.ps.Z %A Hexmoor, H. %A Lammens, J. %A Shapiro, S. %T An Autonomous Agent Architecture for Integrating Perception and Acting with Grounded Embodied Symbolic Reasoning %R 92-21 %D August 1992 %I Department of Computer Science, SUNY Buffalo %X We describe an agent architecture in terms of general principles of organization of components for an autonomous agent that functions in the world. The architecture is described independently of the computational components of the agent that are used to generate behavior. It specifies an integration of explicit representation and reasoning mechanisms, embodied semantics through grounding symbols in perception and action, mechanisms for finding and maintaining a correspondence between symbols and sensory-perceived objects, and implicit representation of special-purpose mechanisms of sensory processing, perception, and motor control for the agent. We then present components that we place in our general architecture to build an agent that exhibits situated activity and learning. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-22.ps.Z %A Milun, D. %A Sher, D.
%T Improving Edge Detectors on Compressed Images - A Trainable Markov Random Field Approach %R 92-22 %D September 1992 %I Department of Computer Science, SUNY Buffalo %X We use Markov random fields to improve the output of the thinned Sobel edge detector, applied to images compressed using the JPEG technique. JPEG compression saves a great deal of file space; however, it introduces correlated errors into the images. This is exactly the circumstance for which our recently developed double neighborhood MRFs are suited (Milun and Sher, 1992a). Double neighborhood MRFs are constructed by sampling from pairs of original images together with noisy imagery. This models the noise within the MRF probability density function without having to make assumptions about its form. This provides an easy way to generate Markov random fields for annealing or other relaxation methods. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-24.ps.Z %A Jagota, A. %A Regan, K. %T Performance of MAX-CLIQUE Approximation Heuristics Under Description-Length Weighted Distributions %R 92-24 %D October 1992 %I Department of Computer Science, SUNY Buffalo %X We study the average performance of several neural-net heuristics applied to the problem of finding the size of the largest clique in an undirected graph. This function is NP-hard even to approximate within a constant factor in the worst case [S.~Arora, C.~Lund, et al., FOCS '92], but the heuristics we study are known to do quite well on average for instances drawn from the uniform distribution on graphs of size $n$. We extend a theorem of M. Li and P. Vitanyi [Info. Proc. Lett. 5/92] to show that for instances drawn from the {\em universal distribution\/} m(x), the average-case performance of any approximation algorithm has the same order as its worst-case performance. The universal distribution is not computable or samplable. However, we give a realistic analogue q(x) which lends itself to efficient empirical testing.
Our results so far are: out of nine heuristics we tested, three did markedly worse under q(x) than under the uniform distribution, but six others revealed little change. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-25.ps.Z %A Revankar, S. %A Sher, D. %T Constrained Contouring in Polar Coordinates %R 92-25 %D October 1992 %I Department of Computer Science, SUNY Buffalo %X A constrained contour is an outline of a region of interest, obtained by linking edges under the constraints of connectivity, smoothness, image context, and subjective or user-specified constraints. We discuss a constrained contouring algorithm in polar coordinates to trace closed contours. The algorithm finds optimal contour locations in all radial directions according to the constraints of context, distance from the approximate contour, and the image gradient. These optimal edge-points are ordered according to their angular coordinates. We treat these edge-points as the nodes of a graph of all possible contours. The links of the graph are weighted so that the shortest path between a pair of nodes is the smooth contour that traces the maximum number of edge-points, and the shortest cycle in the graph gives the optimal contour. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-26.ps.Z %A Chakravarty, S. %A Gong, Y. %T An Algorithm for Diagnosing Two-Line Bridging Faults in Combinational Circuits %R 92-26 %D October 1992 %I Department of Computer Science, SUNY Buffalo %X A novel algorithm for diagnosing all {\em Two-Line Bridging Faults} in Combinational Circuits is presented. It assumes the {\em Wired-OR (Wired-AND)} model and uses: SOPS to represent the set of possible bridging faults, making it space efficient; and a set of rules for dropping faults from the set of possible faults. The rules use {\em fault dictionaries}, not for bridging faults, but for {\em stuck-at} faults only.
Experimental results point to: the computational feasibility of considering all two-line bridging faults while diagnosing combinational circuits; and the effectiveness of the algorithm. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-27.ps.Z %A Green, F. %A Kobler, J. %A Regan, K. %A Schwentick, T. %A Toran, J. %T The Power of the Middle Bit of a Number-P Function %R 92-27 %D October 1992 %I Department of Computer Science, SUNY Buffalo %X We introduce the class MP of languages $L$ which can be solved in polynomial time with the additional information of one bit from a $\#$P function $f$. We prove that the polynomial hierarchy and the classes $\mbox{\it Mod}_k\mbox{P}$, $k\geq2$, are low for this class. We show that the middle bit of $f(x)$ is as powerful as any other bit, and that a wide range of bits around the middle have the same power. By contrast, the $O(\log n)$ many least significant bits are equivalent to $\oplus$P [R.~Beigel, J.~Gill, and U.~Hertrampf, 1990], and we show that the $O(\log n)$ many most significant bits are equivalent to PP; hence these bits are probably weaker. We also study the subclass AmpMP of languages whose MP representations can be ``amplified,'' showing that $\mbox{BPP}^{\oplus{\mbox{P}}}\subseteq \mbox{AmpMP}$, and that important subclasses of AmpMP are low for MP. We translate some of these results to the area of circuit complexity using MidBit (middle bit) gates. A MidBit gate over $w$-many inputs $x_1,\dots,x_w$ is a gate which outputs the value of the $\lfloor\log(w)/2\rfloor^{\rm th}$ bit in the binary representation of the number $\sum_{i=1}^wx_i$. We show that every language in ACC can be computed by a family of depth-2 deterministic circuits of size $2^{(\log n)^c}$ with a MidBit gate at the root and AND-gates of fan-in $(\log n)^c$ at the leaves. This result improves the known upper bounds for the class ACC.
%Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-30.ps.Z %A Sher, David B. %A Cheung, Chris Y. %T Constructing Noise-Reducing Operators from Training Images %R 92-30 %D November 1992 %I Department of Computer Science, SUNY Buffalo %X We discuss constructing non-linear noise-reducing operators from training images. We extract from the training set a probability distribution over local neighborhoods. Our operator changes pixel values when such a change turns a low probability neighborhood into a high probability one. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-32.ps.Z %A Chakravarty, S. %A Theophilopoulos, G. %T Computing Robust Tests for Stuck-open Faults from Stuck-at Test Sets %R 92-32 %D December 1992 %I Department of Computer Science, SUNY Buffalo %X An experimental system for computing robust tests for stuck-open faults in static CMOS circuits is presented. It constructs robust test-pairs from tests for stuck-at faults by identifying several classes of FETs. Robust tests for stuck-open faults in FETs belonging to these classes are constructed from any stuck-at test set by carefully constructing initialization vectors. Initialization vectors are constructed by examining the "parity" of the paths in the circuit. Robust tests for additional faults are identified using stuck-open fault simulation. Experimental results show that the system can compute robust tests for a "very large" percentage of the stuck-open faults in a "reasonable" time. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-33.ps.Z %A Jagota, A. %T Approximating Maximum Clique with a Hopfield Network %R 92-33 %D December 1992 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-01.ps.Z %A Sarnath, R.
%T Doubly logarithmic time parallel sorting %R 93-01 %D January 1993 %I Department of Computer Science, SUNY Buffalo %X Recently, attempts have been made to separate the problem of parallel sorting from that of list ranking, in order to get around the well-known $\Omega (\log n/\log \log n)$ lower bound. These approaches have been of two kinds - {\em chain sorting} and {\em padded sorting}. Here we present nearly optimal, comparison-based {\em padded} sorting algorithms that run in average-case time $O(\frac{1}{\epsilon ^2} + \frac{1}{\epsilon } \log \log n)$ using $n^{1+\epsilon }$ processors, and $O(n^{1+\epsilon })$ space, on a Common CRCW PRAM. From these results, algorithms for {\em chain sorting} within the same time and processor bounds can be easily obtained. Using a similar approach, we also give an $O(1)$ average-case time, comparison-based algorithm for finding the largest of $n$ items using a linear number of processors. The algorithm for finding the maximum runs on a Common CRCW PRAM using only $n^{3/4}$ cells of shared memory. Finally, we obtain randomised algorithms for these problems that run on Common/Tolerant CRCW PRAMs, and also satisfy the above time and processor bounds. As a consequence of our results on sorting, we can also show that the problem of sorting arbitrary input sequences can be reduced to that of sorting integer inputs, within the above mentioned time and processor bounds. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-02.ps.Z %A Sarnath, R. %T Lower bounds for padded sorting and approximate counting %R 93-02 %D January 1993 %I Department of Computer Science, SUNY Buffalo %X We examine the relationship between running time and error of parallel sorting algorithms. This is done by applying Hastad's main lemma to relate the size, depth, and error of simple circuits that sort an input of 0's and 1's. As a consequence, we obtain lower bounds for approximate counting as well.
%Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-04.ps.Z %A Chakravarty, Sreejit %A Sivaprakasm, Suresh %T I_DDQ Measurement Based Diagnosis of Bridging Faults in Full Scan Circuits %R 93-04 %D February 1993 %I Department of Computer Science, SUNY Buffalo %X An algorithm for diagnosing two-node bridging faults in static CMOS combinational circuits (full scan circuits) is presented. This algorithm uses results from $I_{DDQ}$ testing. The bridging faults considered can be between nodes that are outputs of gates or internal nodes of gates. Experimental results on all the ISCAS89 circuits show that $I_{DDQ}$ measurement based diagnosis, using a small number of randomly generated vectors, is very effective. Moreover, it is computationally feasible to diagnose such a large number of bridging faults when $I_{DDQ}$ measurement is used. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-05.ps.Z %A Fenner, Stephen %A Homer, Steve %A Ogiwara, Mitsunori %A Selman, Alan L. %T On Using Oracles That Compute Values %R 93-05 %D February 17, 1993 %I Department of Computer Science, SUNY Buffalo %X This paper focuses on complexity classes of partial functions that are computed in polynomial time with oracles in NPMV, the class of all multivalued partial functions that are computable nondeterministically in polynomial time. Concerning deterministic polynomial-time reducibilities, it is shown that \begin{enumerate} \item A multivalued partial function is polynomial-time computable with $k$ adaptive queries to NPMV if and only if it is polynomial-time computable via $2^k-1$ nonadaptive queries to NPMV. \item A characteristic function is polynomial-time computable with $k$ adaptive queries to NPMV if and only if it is polynomial-time computable with $k$ adaptive queries to NP.
\item Unless the Boolean hierarchy collapses, for every $k$, $k$ adaptive (nonadaptive) queries to NPMV is different from $k+1$ adaptive (nonadaptive) queries to NPMV. \end{enumerate} Nondeterministic reducibilities, lowness and the difference hierarchy over NPMV are also studied. The difference hierarchy for partial functions does not collapse unless the Boolean hierarchy collapses, but, surprisingly, the levels of the difference and bounded query hierarchies do not interleave (as is the case for sets) unless the polynomial hierarchy collapses. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-09.ps.Z %A Chakravarty, Sreejit %A Gong, Yiming %T A Diagnostic Simulation Algorithm for Stuck-at Faults in Combinational Circuits %R 93-09 %D March 1993 %I Department of Computer Science, SUNY Buffalo %X Two faults are said to be equivalent w.r.t. a test set $T$ iff they cannot be distinguished by any test in $T$. The sizes of the equivalence classes are used as a basis for comparing the diagnostic capability of two given test sets. A novel algorithm for computing the equivalence classes of stuck-at faults in combinational (full scan) circuits, w.r.t. a given test set, is presented. Experimental results show the algorithm to be more time- and space-efficient than a previously known algorithm. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-10.ps.Z %A Hexmoor, H. %A Lammens, J. %A Shapiro, S. C. %T Embodiment in GLAIR: A Grounded Layered Architecture with Integrated Reasoning for Autonomous Agents %R 93-10 %D February 1993 %I Department of Computer Science, SUNY Buffalo %X In order to function robustly in the world, autonomous agents need to assimilate concepts for physical entities and relations, grounded in perception and action. They also need to assimilate concepts for perceptual properties like color, shape, and weight, and perhaps eventually even for nonphysical objects like unicorns. 
The process of acquiring concepts that carry meaning in terms of the agent's own physiology we call embodiment. Unlike current robotic agents, those endowed with embodied concepts will more readily understand high level instructions. As a consequence, these robots won't have to be instructed at a low level. We have developed an autonomous agent architecture that facilitates embodiment of action and perception, and accommodates embodied concepts for both physical and nonphysical objects, properties, and relations. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-13.ps.Z %A Lammens, J. %A Hexmoor, H. %A Shapiro, S. C. %T Of Elephants and Men %R 93-13 %D April 1993 %I Department of Computer Science, SUNY Buffalo %X In his "elephant paper", Brooks criticized the ungroundedness of traditional symbol systems, and proposed physically grounded systems as an alternative, in particular the subsumption architecture. Although we are still struggling with many of the issues involved, we believe we have some contributions to make towards solving some of the open problems with physically grounded systems, particularly with respect to whether or how to integrate the old with the new. In this paper we describe an agent architecture that specifies an integration of explicit representation and reasoning mechanisms, embodied semantics through grounding symbols in perception and action, and implicit representations of special-purpose mechanisms of sensory processing, perception, and motor control. We then present components that we place in our general architecture to build agents that exhibit situated activity and learning, and finally a physical agent implementation and two simulation studies. The gist of our paper is that the Brooksian behavior generation approach goes a long way towards modeling elephant behavior, which we find very interesting, but that in order to model more deliberative behavior we may also need something else. 
%Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-15.ps.Z %A Hexmoor, H. %A Lammens, J. %A Caicedo, G. %A Shapiro, S. C. %T Behavior Based AI, Cognitive Processes, and Emergent Behaviors in Autonomous Agents %R 93-15 %D April 1993 %I Department of Computer Science, SUNY Buffalo %X Behavior based AI has questioned the need for modeling intelligent agency using generalized cognitive modules for perception and behavior generation. Behavior based AI has demonstrated successful interactions in unpredictable environments in the mobile robot domain. This has created a gulf between "traditional" approaches to modeling intelligent agency and behavior based approaches. We present an architecture for intelligent autonomous agents which we call GLAIR (Grounded Layered Architecture with Integrated Reasoning). GLAIR is a general multi-level architecture for autonomous cognitive agents with integrated sensory and motor capabilities. GLAIR offers an "unconscious" layer for modeling tasks that exhibit a close affinity between sensing and acting, i.e., behavior based AI modules, and a "conscious" layer for modeling tasks that exhibit delays between sensing and acting. GLAIR provides learning mechanisms that allow for autonomous agents to learn emergent behaviors and add them to their repertoire of behaviors. In this paper we will describe the principles of GLAIR and some systems we have developed that demonstrate how GLAIR based agents acquire and exhibit a repertoire of behaviors at different cognitive levels. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-16.ps.Z %A Ali, S. %A Shapiro, S. C. %T Natural Language Processing Using a Propositional Semantic Network with Structured Variables %R 93-16 %D May 7, 1993 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-18.ps.Z %A Sivaprakasam, S. 
%T Performance Enhancements in SunOS NFS %R 93-18 %D May 1993 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-20.ps.Z %A Choi, Joongmin %T Experience-Based Learning In Deductive Reasoning Systems %R 93-20 %D May 1993 %I Department of Computer Science, SUNY Buffalo %X General knowledge is widely applicable, but relatively slow to apply to any particular situation. Specific knowledge can be used rapidly where it applies, but is only narrowly applicable. We present an automatic scheme to migrate general knowledge to specific knowledge during reasoning. This scheme relies on a nested rule representation that retains the rule builder's intentions about which of the possible specializations of the rule will be most useful. If both general and specific knowledge is available and applicable, a system may be slowed down by trying to use the general knowledge as well as, or instead of, the specific knowledge. However, if general knowledge is purged from the system after migration, the system will lose the flexibility of being able to handle different situations. To retain the flexibility without paying the price in speed, a shadowing scheme is presented that prevents general knowledge from being used when specific knowledge migrated from it is available and applicable. The combination of knowledge migration and knowledge shadowing allows a deductive reasoning system to learn from and exploit previous experience. Experience is represented by the instance relationship between the general knowledge and the specific knowledge migrated from it. We also present techniques for implementing efficient rules of inference in natural deduction systems by caching and recalling the history of rule activation steps, thereby avoiding duplicate pattern matching and binding conflict resolution. 
To reduce the complexity of manipulating rule activation steps from exponential to polynomial, methods of distributing the information about rule activation steps are developed that minimize the number of activation steps and the number of substitution compatibility tests among shared variables. An implementation of these schemes in a network-based reasoning system is discussed. Test results are shown that demonstrate the predicted benefits of these ideas. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-07.ps.Z %A Shu, Wei %A Wu, Min-You %T Sparse Implementation of Revised Simplex Algorithms on Parallel Computers %R 93-07 %D July 01, 1993 %I Department of Computer Science, SUNY Buffalo %X Parallelizing sparse simplex algorithms is one of the most challenging problems. Because of very sparse matrices and very heavy communication, the ratio of computation to communication is extremely low. It becomes necessary to carefully select parallel algorithms, partitioning patterns, and communication optimization to achieve a speedup. Two implementations on Intel hypercubes are presented in this paper. The experimental results show that a nearly linear speedup can be obtained with the basic revised simplex algorithm. However, the basic revised simplex algorithm has many fill-ins. We also implement a revised simplex algorithm with LU decomposition. It is a very sparse algorithm, and it is very difficult to achieve a speedup with it. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-24.ps.Z %A Regan, Kenneth W. %T Efficient Reductions from NP to Parity Using Error-Correcting Codes (preliminary version) %R 93-24 %D June 12, 1993 %I Department of Computer Science, SUNY Buffalo %X This paper proves that every language in NP is recognized by an RP[$\oplus$P] machine whose time complexity is quasilinear, apart from the time to verify witnesses. 
The results significantly improve the number of random bits, success probability, and running time of Valiant and Vazirani's original construction ({\em Theor. Comp. Sci.\/} 47:85--93, 1986), and beat both the $2n$ random bits and the time/success tradeoff in subsequent methods based on universal hashing. Questions of further improvements are connected to open problems in the theory of error-correcting codes. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-25.ps.Z %A Regan, Kenneth W. %T On the Difference Between Turing Machine Time and Random-Access Machine Time %R 93-25 %D July 12, 1993 %I Department of Computer Science, SUNY Buffalo %K Computational complexity, theory of computation, Turing machines, random-access machines, models, simulation, finite automata %X We introduce a model of computation called the {\em Block Move\/} (BM) model. The BM extends the {\em Block Transfer\/} (BT) model of Aggarwal, Chandra, and Snir (FOCS 1987), who studied time complexity under various {\em memory access cost functions\/} ranging from $\mu_1(a) := a$ to $\mu_{\log}(a) := \lceil \log_2 a \rceil$. We show that up to factors of $\log t$ in the total running time $t$, BMs under $\mu_1$ are equivalent to multitape Turing machines, and BMs under $\mu_{\log}$ are equivalent to log-cost RAMs. We also prove that for any well-behaved $\mu$ the BM classes ${\rm D}\mu{\rm TIME}[t(n)]$ form a tight deterministic time hierarchy. Whether there is any hierarchy at all when $\mu$ rather than $t$ varies is tied to long-standing open problems of determinism vs. nondeterminism. 
%Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-26.ps.Z %A Osorio, Mauricio %A Jayaraman, Bharat %T Subset Assertions and Negation-As-Failure %R 93-26 %D July 27, 1993 %I Department of Computer Science, SUNY Buffalo %K Subset Assertions, Transitive Closures, Memoization, Negation-As-Failure, Stratified Semantics, Well-Founded Semantics %X Subset assertions provide a declarative and natural means for expressing solutions to many problems involving sets. This paper is motivated by the use of subset assertions for formulating transitive closures and solving containment constraints in applications of graph traversal and program analysis. In these applications, circular containment constraints may arise, for which we propose an operational strategy based upon memoization and reexecution of function calls. We provide formal declarative and operational semantics for this class of subset assertions. One of the main technical results of this paper is a succinct translation of subset assertions into normal program clauses [L87] such that the stratified semantics of the resulting normal programs coincides with the declarative semantics of subset assertions. This translation is interesting because the operational semantics of subset assertions appears to be very different from that of normal programs---due to the setof-like capability and the need for reexecution in subset assertions, both of which are absent in normal program clauses. (However, this translation is not an acceptable implementation of subset assertions due to its inefficiency.) We also discuss the connection between our proposed declarative semantics and recent approaches such as stable and well-founded semantics. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-21.ps.Z %A Hemaspaandra, Edith %A Naik, Ashish V. %A Ogiwara, Mitsunori %A Selman, Alan L. %T P-Selective Sets, and Reducing Search to Decision vs. 
Self-Reducibility %R 93-21 %D July 30, 1993 %I Department of Computer Science, SUNY Buffalo %K computational, complexity, complexity classes, p-selective sets, self-reducible, search reduces to decision, proof systems, cheatable, left cuts %Y F.1.2 Modes of Computation F.1.3 Complexity Classes (see also F.2) %X We obtain several results that distinguish self-reducibility of a language $L$ from the question of whether search reduces to decision for $L$. These include: \begin{itemize} \item[(i)] If ${\rm NE} \ne {\rm E}$, then there exists a set $L$ in ${\rm NP} - {\rm P}$ such that search reduces to decision for $L$, search does {\em not} nonadaptively reduce to decision for $L$, and $L$ is {\em not} self-reducible. \item[(ii)] If ${\rm UE} \ne {\rm E}$, then there exists a language $L \in {\rm UP} - {\rm P}$ such that search nonadaptively reduces to decision for $L$, but $L$ is {\em not} self-reducible. \item[(iii)] If ${\rm UE} \cap \mbox{co-{\rm UE}} \ne {\rm E}$, then there is a disjunctive self-reducible language $L \in {\rm UP} - {\rm P}$ for which search does {\em not} nonadaptively reduce to decision. \item[(iv)] There is an oracle relative to which satisfying assignments cannot be computed nonadaptively relative to SAT. \end{itemize} We prove that if ${\rm NE} \not\subseteq {\rm BPE}$, then there is a language $L \in {\rm NP} - {\rm BPP}$ such that $L$ is randomly self-reducible, {\em not} nonadaptively randomly self-reducible, and {\em not} self-reducible. We obtain results concerning tradeoffs in multiprover interactive proof systems, and obtain results that distinguish checkable languages from those that are nonadaptively checkable. Many of our results depend on new techniques for constructing p-selective sets. We obtain a p-selective set that is {\em not} $\leq_{tt}^{\rm P}$-equivalent to any tally language, and we show that if ${\rm P} = {\rm PP}$, then every p-selective set is $\leq_T^{\rm P}$-equivalent to a tally language. 
Similarly, if ${\rm P} = {\rm NP}$, then every cheatable set is $\leq_m^{\rm P}$-equivalent to a tally language. We construct a recursive p-selective tally set that is {\em not} cheatable. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-22.ps.Z %A Sarnath, R. %T A Randomized Parallel Algorithm for dfa-minimization %R 93-22 %D August 03, 1993 %I Department of Computer Science, SUNY Buffalo %K Parallel-algorithms, Partitioning, DFA minimization %X The problem of finding the coarsest partition of a set $S$ with respect to another partition of $S$ and one or more functions on $S$ has several applications, one of which is the state minimization of finite state automata. The problem has a well known $O(n \log n)$ sequential algorithm. In this paper, we present efficient parallel randomised algorithms for the problem. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-31.ps.Z %A Lammens, Johan M. %A Shapiro, Stuart C. %T Learning Symbolic Names for Perceived Colors %R 93-31 %D August 16, 1993 %I Department of Computer Science, SUNY Buffalo %K color perception, learning, symbol grounding, vision, knowledge representation %Y I.2.10 Vision and Scene Understanding (see also I.4.8,I.5)-Intensity, color, photometry and thresholding I.2.4 Knowledge Representation Formalisms and Methods I.2.9 Robotics-Sensors %X We are working on a computational model of color perception and color naming, which can be seen as an instance of symbol grounding in the domain of color, or as an attempt to provide an artificial intelligent agent with embodied concepts of color. We discuss two areas of the model where learning is used: learning a non-linear mapping between two color spaces, and learning a relation between color space coordinates and a set of symbolic color names. 
We have used a traditional error back-propagation learning algorithm for the first problem, and are considering several different learning paradigms for the second problem, ranging from traditional clustering techniques to an experimental ``space warp'' method. Using learning gives us a relatively easy way to determine a non-linear transformation of spaces in the first case, and increases the flexibility of the approach with respect to different application needs (and languages) in the second case. We discuss the learning methods used or considered and the problems encountered. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-30.ps.Z %A Bar-Yehuda, R. %A Dabholkar, V. %A Govindarajan, K. %A Sivakumar, D. %T Randomized Local Approximations with Applications to the MAX-CLIQUE Problem %R 93-30 %D August 17, 1993 %I Department of Computer Science, SUNY Buffalo %X We present a heuristic scheme for finding a clique of maximum size or weight. Our heuristic scheme uses approximation techniques for the weighted vertex cover problem, due to R.~Bar-Yehuda and S.~Even \cite{BE85}. The approach is based on the notion of making incremental progress towards finding a clique. At each step of our algorithm, one or more {\em local optimization} techniques are attempted, and when these techniques do not make progress, we perform {\em local approximations}. In the local approximation step, the algorithm selects a heuristic from a set of heuristics, depending on the characteristics of the graph at that stage. Once a solution is constructed, we attempt to iteratively refine the solution. Randomization plays a key role at various steps of our algorithm. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-28.ps.Z %A Hemaspaandra, Lane A. %A Naik, Ashish V. %A Ogiwara, Mitsunori %A Selman, Alan L. 
%T Computing Solutions Uniquely Collapses the Polynomial Hierarchy %R 93-28 %D August 17, 1993 %I Department of Computer Science, SUNY Buffalo %K computational complexity, complexity classes, NPSV, NPMV, refinements, selectivity, lowness, advice classes, nonuniform classes, polynomial hierarchy %Y F.1.2 Modes of Computation F.1.3 Complexity Classes (see also F.2) %X Is there a {\em single-valued\/} NP function that, when given a satisfiable formula as input, outputs a satisfying assignment? That is, can a nondeterministic function cull just one satisfying assignment from a possibly exponentially large collection of assignments? We show that if there is such a nondeterministic function, then the polynomial hierarchy collapses to its second level. As the existence of such a function is known to be equivalent to the statement ``every multivalued NP function has a single-valued NP refinement,'' our result provides the strongest evidence yet that multivalued NP functions cannot be refined. We prove our result via theorems of independent interest. We say that a set $A$ is NPSV-selective (NPMV-selective) if there is a 2-ary partial function in NPSV (NPMV, respectively) that decides which of its inputs (if any) is ``more likely'' to belong to $A$; this is a nondeterministic analog of the recursion-theoretic notion of the semi-recursive sets and the extant complexity-theoretic notion of P-selectivity. Our hierarchy collapse follows by combining the easy observation that every set in NP is NPMV-selective with either of the following two theorems that we prove: (1) If $A \in {\rm NP}$ is NPSV-selective, then $A \in ({\rm NP} \cap {\rm coNP})/{\rm poly}$. (2) If $A \in {\rm NP}$ is NPSV-selective, then $A$ is Low$_2$. To wit, either result implies that if every set in NP is NPSV-selective, then the polynomial hierarchy collapses to its second level, ${\rm NP}^{\rm NP}$. 
%Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-29.ps.Z %A Hemaspaandra, Lane A. %A Hoene, Albrecht %A Naik, Ashish V. %A Ogiwara, Mitsunori %A Selman, Alan L. %A Thierauf, Thomas %A Wang, Jie %T Selectivity: Reductions, Nondeterminism, and Function Classes %R 93-29 %D August 18, 1993 %I Department of Computer Science, SUNY Buffalo %K computational complexity, complexity classes, NPSV, NPMV, refinements, selectivity, lowness, advice classes, nonuniform classes, polynomial hierarchy, NPSV-total, reductions, equivalences %Y F.1.2 Modes of Computation F.1.3 Complexity Classes (see also F.2) %X A set is P-selective if there is a polynomial-time semi-decision algorithm for the set---an algorithm that given any two strings decides which is ``more likely'' to be in the set. This paper studies two natural generalizations of P-selectivity: the NP-selective sets and the sets reducible or equivalent to P-selective sets via polynomial-time reductions. We establish a strict hierarchy among the various reductions and equivalences to P-selective sets. We show that the NP-selective sets are in $({\rm NP} \cap {\rm coNP})/{\rm poly}$ and are extended low, and that those in NP are Low$_2$; we also show that NP-selective sets cannot be NP-complete unless ${\rm NP} = {\rm coNP}$. By studying more general notions of nondeterministic selectivity, we conclude that all multivalued NP functions have single-valued NP refinements only if the polynomial hierarchy collapses to its second level. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-33.ps.Z %A Regan, Kenneth W. 
%T Machine Models and Linear Time Complexity %R 93-33 %D August 16, 1993 %I Department of Computer Science, SUNY Buffalo %K Complexity theory, machine models, simulations, complexity classes, linear time, lower bounds %Y F.1.1 Models of Computation (see also F.4.1)-Relations among models F.1.3 Complexity Classes (see also F.2)-Relations among complexity classes F.1.3 Complexity Classes (see also F.2)-Relations among complexity measures F.4.3 Formal Languages (D.3.1)-Classes defined by resource-bounded automata %X This article first surveys known results on simulations among commonly-studied models of sequential computation, and observes that the problem of whether these models can simulate each other in linear time is essentially wide open. Then the status of research on nonlinear lower bounds for natural problems in these models is examined. The author's {\it Block Move\/} (BM) model [K. Regan, UBCS TR 92-28, submitted for publication] is described in terms of an ``editing game'' on strings, with a cost parameter $\mu$ that reflects features of real machines. Results comparing BM and TM and RAM complexity classes are given, an avenue for possible nonlinear lower bounds is sketched, and several open problems are posed. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-35.ps.Z %A Regan, Kenneth W. %T A New Parallel Vector Model, With Exact Characterizations of NC^k %R 93-35 %D August 17, 1993 %I Department of Computer Science, SUNY Buffalo %K Computational complexity, machine models, parallel computation, vector machines, circuits %Y F. 
Theory of Computation F.1.1 Models of Computation (see also F.4.1)-Unbounded-action devices (e.g., cellular automata, circuits, networks of machines) F.1.2 Modes of Computation-Alternation and nondetermination F.1.3 Complexity Classes (see also F.2)-Relations among complexity classes %X This paper develops a new and natural parallel vector model, and shows that for all $k \geq 1$, the languages recognizable in $O(\log^k n)$ time and polynomial work in the model are exactly those in NC$^k$. Some improvements to other simulations in the literature of parallel models and reversal complexity are given. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-34.ps.Z %A Naik, Ashish V. %A Regan, Kenneth W. %A Sivakumar, D. %T Quasilinear Time Complexity Theory %R 93-34 %D August 20, 1993 %I Department of Computer Science, SUNY Buffalo %K Computational complexity, complexity classes, linear time, search vs. decision, error-correcting codes %Y F. Theory of Computation F.1.3 Complexity Classes (see also F.2)-Complexity hierarchies F.1.3 Complexity Classes (see also F.2)-Relations among complexity classes %X This paper furthers the study of quasi-linear time complexity initiated by Schnorr [J. ACM 25:136--145, 1976] and Gurevich and Shelah [Proc. Logic at Botik '89, LNCS 363, pp. 108--118, 1989]. We show that the fundamental properties of the polynomial-time hierarchy carry over to the quasilinear-time hierarchy. Whereas all previously known versions of the Valiant-Vazirani reduction from NP to parity run in quadratic time, we give a new construction using error-correcting codes that runs in quasilinear time. We show, however, that the important equivalence between search problems and decision problems in polynomial time is unlikely to carry over: if search reduces to decision for SAT in quasi-linear time, then all of NP is contained in quasi-polynomial time. Other connections to work by Stearns and Hunt [Math. Sys. Thy. 
23:209--225, 1990] on ``power indices'' of NP languages are made. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-37.ps.Z %A Hexmoor, Henry H. %A Lammens, Johan M. %A Shapiro, Stuart C. %T An Autonomous Agent Architecture for Integrating "Unconscious" and "Conscious", Reasoned Behaviors %R 93-37 %D August 24, 1993 %I Department of Computer Science, SUNY Buffalo %K GLAIR, autonomous agents, color perception, learning behaviors %Y I.2.0 General-Cognitive simulation I.2.4 Knowledge Representation Formalisms and Methods I.2.6 Learning (see also K.3.2) I.2.9 Robotics I.2.10 Vision and Scene Understanding (see also I.4.8,I.5)-Intensity, color, photometry and thresholding %X We outline an autonomous agent architecture, GLAIR, and one of its instantiations, the Air Battle Simulation game. Our architecture models agents with ``conscious'' and ``unconscious'' behaviors. The architecture provides for grounded symbolic representations through embodied perception, and provides a mechanism for learning behaviors. We discuss how color perception fits into the architecture as a particular case of grounding and embodiment. We also discuss how the Air Battle Simulation implements an autonomous agent conforming to the architecture, and how knowledge migration and various other features of the architecture apply to it. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-36.ps.Z %A Naik, Ashish V. %A Baveja, Alok %A Batta, Rajan %A Caulkins, Jonathan P. %T Scheduling Crackdowns on Illicit Drug Markets %R 93-36 %D August 30, 1993 %I Department of Computer Science, SUNY Buffalo %K Crackdown Scheduling, NP-completeness, Approximation Algorithms %Y F.2.2 Nonnumerical Algorithms and Problems (see also E.2-5,G.2,H.2-3)-Sequencing and scheduling F.1.1 Models of Computation (see also F.4.1)-Computability theory E.5.1 Optimization %X This paper presents an analytical approach for scheduling crackdowns on street-corner drug markets. 
The crackdown scheduling problem is shown to be NP-complete. A dynamic programming formulation is presented, along with an exponential-time optimal algorithm. We then provide efficient optimal algorithms for several special cases and approximation algorithms for the general case. These results show that the optimal strategy is to give priority to markets that take longer to bring down and that require low levels of post-crackdown maintenance. The results are then extended to incorporate dealer displacement between drug markets. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-23.ps.Z %A Mackey, Niloufer %T Hamilton and Jacobi Meet Again: Quaternions and the Eigenvalue Problem %R 93-23 %D May 15, 1993 %I Department of Computer Science, SUNY Buffalo %X The algebra isomorphism between $M_4({\bf R})$ and $H \otimes H$, where $H$ is the algebra of quaternions, has an unexpected computational payoff: it helps construct an orthogonal similarity that $2\times 2$ block-diagonalizes a $4\times 4$ symmetric matrix. Replacing plane rotations with these more powerful $4\times 4$ rotations leads to a quaternion-Jacobi method in which the `weight' of 4 elements (in a $2\times 2$ block) is transferred all at once onto the diagonal. Quadratic convergence sets in sooner, and the new method requires at least one fewer sweep than plane-Jacobi methods. An analogue of the sorting angle for plane rotations is developed for these $4\times 4$ rotations. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-39.ps.Z %A Sher, David B. %A Wafford, Charlotte E. 
%A Milun, Davin %T Relating Gibbs distributions to empirically derived marginal distributions for image analysis %R 93-39 %D November 23, 1993 %I Department of Computer Science, SUNY Buffalo %K Markov random field; Gibbs distribution; ICM; simulated annealing; MAP; pseudoinverse %Y I.2.10 Vision and Scene Understanding (see also I.4.8,I.5) I.2.6 Learning (see also K.3.2)-Parameter learning %X The log marginal probability ratios of a Gibbs distribution over an 8-connected neighborhood system are a linear function of its 66 clique weights. We empirically determine the log marginal probability ratios of artificial noiseless images and use the pseudoinverse method to compute the closest Gibbs distribution. We compare these Gibbs distributions to {\it ad hoc} distributions suggested in the literature and to the empirical marginals from which they are derived. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-02.ps.Z %A Cai, Jin-Yi %A Naik, Ashish V. %A Selman, Alan L. %T On P-selective sets and Adaptive versus Nonadaptive Queries to NP %R 94-02 %D February 02, 1994 %I Department of Computer Science, SUNY Buffalo %K p-selective, NP truth-table, adaptive queries, nonadaptive queries %Y F.1.1 Models of Computation (see also F.4.1)-Computability theory F.1.2 Modes of Computation-Alternation and nondetermination F.1.3 Complexity Classes (see also F.2) F.1.3 Complexity Classes (see also F.2)-Relations among complexity classes %X We show that if there exists a p-selective set that is NP-hard under truth-table reductions, then every function that is computable in polynomial time by truth-table access to an NP oracle is computable in polynomial time by making at most $O(\log n)$ queries to an NP oracle. As a consequence, it follows that if there exists a tt-hard p-selective set for NP, then for all $k>0$, ${\rm NP} \subseteq {\rm DTIME}[2^{n/\log^k n}]$. 
Reporting progress on the question of whether every function that is computable in polynomial time by truth-table access to an NP oracle is computable in polynomial time by making at most $O(\log n)$ queries to an NP oracle, we show that if there is a constant $k$ such that \[ {\rm PF}_{{\mbox{$n^k$-tt}}}^{\rm NP} \subseteq {\rm PF}^{\rm NP}[k\lceil \log n \rceil -1], \] then P = NP. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-38.ps.Z %A Chakravarty, Sreejit %T A Study of Theoretical Issues in the Synthesis of Delay Fault Testable Circuits %R 93-38 %D October 26, 1993 %I Department of Computer Science, SUNY Buffalo %K Delay Fault Testable Circuits; Logic Optimization; Logic Synthesis; Testability Enhancing Transformations; Testability Preserving Transformations %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-40.ps.Z %A Jayaraman, Bharat %A Osorio, Mauricio %A Moon, Kyonghee %T Partial Order Logic Programming %R 93-40 %D November 30, 1993 %I Department of Computer Science, SUNY Buffalo %K lattices, partial orders, functional programming, logic programming, sets, monotonic functions, memoization, fixed-point computation %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-01.ps.Z %A Ali, Syed S. 
%T A "Natural Logic" for Natural Language Processing and Knowledge Representation %R 94-01 %D February 09, 1994 %I Department of Computer Science, SUNY Buffalo %K Natural Language Processing, Knowledge Representation, Surface Reasoning, Subsumption, ANALOG %Y I.2.0 General-Cognitive simulation I.2.3 Deduction and Theorem Proving-Deduction (e.g., natural, rule-based) I.2.4 Knowledge Representation Formalisms and Methods-Predicate logic I.2.4 Knowledge Representation Formalisms and Methods-Representation languages I.2.4 Knowledge Representation Formalisms and Methods-Semantic networks I.2.7 Natural Language Processing-Discourse I.2.7 Natural Language Processing-Language generation I.2.7 Natural Language Processing-Language parsing and understanding %X We define a knowledge representation and inference formalism that is well suited to natural language processing. In this formalism every subformula of a formula is closed. We motivate this by observing that any formal language with (potentially) open sentences is an inappropriate medium for the representation of natural language sentences. Open sentences in such languages are a consequence of the separation of variables from their quantifier and type constraints, typically in the antecedents of rules. This is inconsistent with the use of descriptions and noun phrases corresponding to variables in language. Variables in natural language are constructions that are typed and quantified as they are used. A consequence of this is that variables in natural language may be freely reused in dialog. This leads to the use of pronouns and discourse phenomena such as ellipsis involving reuse of entire subformulas. We present an augmentation to the representation of variables so that variables are not atomic terms. These ``structured'' variables are typed and quantified as they are defined and used. 
This leads to an extended, more ``natural'' logical language whose use and representations are consistent with the use of variables in natural language. Structured variables simplify the tasks associated with natural language processing and generation, by localizing noun phrase processing. The formalism is defined in terms of a propositional semantic network, starting from nodes and arcs connecting nodes, subsumption, matching, to inference. It allows the resolution of some representational difficulties with certain classes of natural language sentences (e.g. the so-called ``donkey'' sentences and sentences involving branching quantifiers). Reuse phenomena, such as pronominalization and ellipsis, are captured in the representation by structure-sharing. A major advantage of this structured representation of variables is that it allows a form of terminological and derived subsumption similar to surface reasoning in natural language. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-03.ps.Z %A Chakravarty, Sreejit %A Gong, Yiming %T Voting Model Based Diagnosis of Bridging Faults in Combinational Circuits %R 94-03 %D February 15, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-42.ps.Z %A Chakravarty, Sreejit %A Thadikaran, Paul J. 
%T On Iddq Measurement Based Analysis of Bridging Faults in CMOS Circuits %R 93-42 %D November 1993 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-04.ps.Z %A Kumar, Deepak %T From Beliefs and Goals to Intentions and Actions: An Amalgamated Model of Inference and Acting %R 94-04 %D March 11, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-23.ps.Z %A Hexmoor, Henry %A Nute, Donald %T Methods for deciding what to do next and learning %R 92-23 %D September, 1992 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-06.ps.Z %A Chakravarty, Sreejit %A Dabholkar, Vinay %T Minimizing Power Dissipation in Scan Circuits During Test Application %R 94-06 %D April 15, 1994 %I Department of Computer Science, SUNY Buffalo %K Power dissipation, full isolated scan, full integrated scan %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-07.ps.Z %A Hexmoor, Henry H. %T What are routines good for? %R 94-07 %D April 15, 1994 %I Department of Computer Science, SUNY Buffalo %X We show how agents with routines a) use them to guide their everyday activity, b) use them to enrich their abstract concepts about acts. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-08.ps.Z %A Ahmad, Ishfaq %A Wu, Min-You %A Yang, Jaehyung %A Ghafoor, Arif %T A Performance Assessment of Express on the iPSC/2 and iPSC/860 Hypercube Computers %R 94-08 %D April 15, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-09.ps.Z %A Hexmoor, Henry H. 
%T A Methodology for Developing Competent Agents Without Sensor and Actuator Profusion %R 94-09 %D April 15, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-12.ps.Z %A Govindarajan, Kannan %A Jayaraman, Bharat %A Mantha, Surya %T Preference Logic Programming: Optimization as Inference %R 94-12 %D April 15, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-10.ps.Z %A Dabholkar, V.P. %A Chakravarty, S. %T Minimizing Power Dissipation in Combinational Circuits During Test Application %R 94-10 %D April 15, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-15.ps.Z %A Cai, Jin-Yi %A Chari, Suresh %T On the Impossibility of Amplifying the Independence of Random Variables %R 94-15 %D May 04, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-14.ps.Z %A Cai, Jin-Yi %A Hirsch, Michael D. %T Rotation Distance, Triangulations of Planar Surfaces and Hyperbolic Geometry %R 94-14 %D May 04, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-16.ps.Z %A Cai, Jin-Yi %T Computing Jordan Normal Forms exactly for commuting matrices in polynomial time %R 94-16 %D May 11, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-17.ps.Z %A Cai, Jin-Yi %A Lipton, Richard J. 
%A Zalcstein, Yechezkel %T The complexity of the A B C problem resolved %R 94-17 %D May 12, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-19.ps.Z %A Green, Frederic %A Koebler, Johannes %A Regan, Kenneth W. %A Schwentick, Thomas %A Toran, Jacobo %T The Power of the Middle Bit of a #P Function %R 94-19 %D May 20, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-23.ps.Z %A Regan, Kenneth W. %T Linear-Time Algorithms in Memory Hierarchies %R 94-23 %D May 20, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-24.ps.Z %A Regan, Kenneth W. %T Linear Speed-Up, Information Vicinity, and Finite-State Machines %R 94-24 %D May 20, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-21.ps.Z %A Naik, Ashish V. %A Regan, Kenneth W. %A Sivakumar, D. %T Quasilinear Time Complexity Theory %R 94-21 %D May 20, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-18.ps.Z %A Regan, Kenneth W. %T Linear Time and Memory-Efficient Computation %R 94-18 %D May 20, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-20.ps.Z %A Li, Lide %A Ogihara, Mitsunori %A Regan, Kenneth W. %T On Information From #P Functions %R 94-20 %D May 20, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-22.ps.Z %A Cai, Jin-Yi %A Lipton, Richard J. %A Longpre, Luc %A Ogihara, Mitsunori %A Regan, Kenneth W. %A Sivakumar, D. 
%T Communication Complexity of Key Agreement on Limited Ranges %R 94-22 %D May 20, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-26.ps.Z %A Lammens, Johan M. %T A Computational Model of Color Perception and Color Naming %R 94-26 %D June 24, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-29.ps.Z %A Regan, Kenneth W. %T Index Sets and Presentations of Complexity Classes (revised version) %R 94-29 %D July 29, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-30.ps.Z %A Jana, Devashis %A Jayaraman, Bharat %T Set Constructors, Finite Sets, and Logical Semantics %R 94-30 %D August 08, 1994 %I Department of Computer Science, SUNY Buffalo %K set constructors, finite sets, set axioms, set unification, freeness, Herbrand models. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-43.ps.Z %A Revankar, Shriram %T Supervised Image Segmentation %R 93-43 %D May 1993 %I Department of Computer Science, SUNY Buffalo %Y J.3.0 Biology H.1.2 User/Machine I.4.6 Segmentation I.4.7 Feature Measurement %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-34.ps.Z %A Gong, Yiming %A Chakravarty, Sreejit %T A Diagnosis Algorithm for Bridging Faults in Combinational Circuits %R 94-34 %D September 13, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-38.ps.Z %A Cai, Jin-Yi %A Cai, Pu %A Zhu, Yixin %T A fully polynomial time approximation scheme in scheduling deteriorating jobs %R 94-38 %D October 02, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-28.ps.Z %A Rapaport, William J. 
%T Understanding Understanding: Syntactic Semantics and Computational Cognition %R 94-28 %D October 20, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-33.ps.Z %A Sivalingam, Krishna Moorty %T High-Speed Communication Protocols for All-Optical Wavelength Division Multiplexed Computer Networks %R 94-33 %D October 20, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-27.ps.Z %A Govindarajan, Kannan %A Jayaraman, Bharat %A Mantha, Surya %T Preference Logic Grammars %R 94-27 %D June 24, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-13.ps.Z %A Govindarajan, Kannan %A Jayaraman, Bharat %T Intensional Algorithmic Debugging %R 94-13 %D May 20, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-25.ps.Z %A Shu, Wei %T Runtime Incremental Parallel Scheduling (RIPS) on Distributed Memory Computers %R 94-25 %D November 11, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-32.ps.Z %A Kumar, Amruth N. 
%T Component Ontological Representation of Function For Candidate Discrimination in Model Based Diagnosis %R 94-32 %D November 11, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-37.ps.Z %A Jana, Devashis %T Semantics of Subset-Logic Languages %R 94-37 %D November 11, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-39.ps.Z %A Curtis, Ronald Sanger %T Data Structure Complexity Metrics %R 94-39 %D November 11, 1994 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-27.ps.Z %A Miller, Russ %T The Status of Parallel Processing Education %R 93-27 %D July, 1993 (updated subsequently) %I Department of Computer Science, SUNY Buffalo %K Parallel Processing, Education %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-06.ps.Z %A He, Xin %T Grid Embedding of Internally Triangulated Plane Graphs without Non-empty Triangles %R 95-06 %D February 02, 1995 %I Department of Computer Science, SUNY Buffalo %K planar graphs, graph drawing %X A straight line grid embedding of a plane graph G is a drawing of G such that the vertices are drawn at grid points and the edges are drawn as non-intersecting straight line segments. In this paper, we show that, if an internally triangulated plane graph G has no non-empty triangles (a non-empty triangle is a triangle of G containing some vertices in its interior), then G can be embedded on a grid of size W x H such that W+H <= n, W <=(n+3)/2 and H <= 2(n-1)/3, where n is the number of vertices of G. Such an embedding can be computed in linear time. 
%Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/90-23.txt.Z %A He, Xin %T An Improved Algorithm for the Planar 3-Cut Problem %R 90-23 %D February, 1990 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/90-26.txt.Z %A Sanath, R. %A He, X. %T Efficient Parallel Algorithms for Selection and Searching on Sorted Matrices %R 90-26 %D February, 1990 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %A Chang, C.-S. %A DeTitta, G.T. %A Hauptman, H.A. %A Miller, R. %A Thuman, P. %A Weeks, C.M. %T Using Parallel Computers to Solve the Phase Problem of X-Ray Crystallography %R 92-15 %D June 1992 %I Department of Computer Science, SUNY Buffalo %K X-Ray Crystallography, Parallel Computing %X not available, final version appears in The International Journal of Supercomputer Applications, vol 7, no. 1, pp. 25-49 %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-01.ps.Z %A Zhang, Aidong %A Nodine, Marian %A Bhargava, Bharat %T Ensuring Semi-Atomicity in Heterogeneous Distributed Database Systems %R 95-01 %D February 04, 1995 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-03.ps.Z %A Jagota, Arun K. %A Narasimhan, Giri %A Regan, Kenneth W. %T Information Capacity of Binary Weights Associative Memories %R 95-03 %D January 24, 1995 %I Department of Computer Science, SUNY Buffalo %X We study the amount of information stored in the fixed points of random instances of two binary weights associative memory models: the Willshaw Model (WM) and the Inverted Neural Network (INN). For these models, we show divergences between the information capacity (IC) as defined by Abu-Mostafa and Jacques, and information calculated from the standard notion of storage capacity by Palm and Grossman, respectively. 
We prove that the WM has asymptotically optimal IC for nearly the full range of threshold values, the INN likewise for constant threshold values, and both over all degrees of sparseness of the stored vectors. This is contrasted with the result by Palm, which required stored random vectors to be logarithmically sparse to achieve good storage capacity for the WM, and with that of Grossman, which showed that the INN has poor storage capacity for random vectors. We propose Q-state versions of the WM and the INN, and show that they retain asymptotically optimal IC while guaranteeing stable storage. By contrast, we show that the Q-state INN has poor storage capacity for random vectors. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/90-13.ps.Z %A Rapaport, William J. %T Computer Processes and Virtual Persons: Comments on Cole's "Artificial Intelligence and Personal Identity" %R 90-13 %D May 1990 %I Department of Computer Science, SUNY Buffalo %K computer processes, virtual persons, John Searle, Chinese-Room argument, artificial intelligence, cognitive science %Y I.2 ARTIFICIAL INTELLIGENCE I.2.0 General-Cognitive simulation I.2.0 General-Philosophical foundations I.2.1 Applications and Expert Systems (see also H.4,J)-Natural language interfaces I.2.4 Knowledge Representation Formalisms and Methods-Semantic networks I.2.7 Natural Language Processing I.2.7 Natural Language Processing-Language parsing and understanding %X This is a draft of the written version of comments on a paper by David Cole, presented orally at the American Philosophical Association Central Division meeting in New Orleans, 27 April 1990. Following the written comments are 2 appendices: One contains a letter to Cole updating these comments. The other is the handout from the oral presentation. In general, I am sympathetic to Cole's arguments; my comments seek to clarify and extend the issues. 
Specifically, I argue that, in Searle's celebrated Chinese Room Argument, Searle-in-the-room does understand Chinese, in spite of his claims to the contrary. He does this in the sense that he is executing a computer ``process'' that can be said to understand Chinese. (The argument that the process in fact does understand Chinese is made elsewhere; here, I merely assume that *if* anything understands Chinese, it is a ``process'' executed by Searle-in-the-room.) I also show, by drawing an analogy between the way that I add numbers in my head and the way that a calculator adds numbers, that Searle-in-the-room's claim that he does not understand Chinese does not contradict the fact that, by executing the Chinese-natural-language-understanding algorithm, he does understand Chinese. %Z Tue, 19 Sep 1995 15:03:53 GMT %A Shapiro, Stuart C. %A Rapaport, William J. %T The SNePS Family %R 90-21 %D September 1990 %I Department of Computer Science, SUNY Buffalo %K SNePS, semantic networks %Y I.2 ARTIFICIAL INTELLIGENCE I.2.0 General-Cognitive simulation I.2.0 General-Philosophical foundations I.2.1 Applications and Expert Systems (see also H.4,J)-Natural language interfaces I.2.3 Deduction and Theorem Proving-Deduction (e.g., natural, rule-based) I.2.3 Deduction and Theorem Proving-Nonmonotonic reasoning and belief revision I.2.4 Knowledge Representation Formalisms and Methods I.2.4 Knowledge Representation Formalisms and Methods-Representation languages I.2.4 Knowledge Representation Formalisms and Methods-Semantic networks %X SNePS, the Semantic Network Processing System, has been designed to be a system for representing the beliefs of a natural-language-using intelligent system (a ``cognitive agent''). 
It has always been the intention that a SNePS-based ``knowledge base'' would ultimately be built, not by a programmer or knowledge engineer entering representations of knowledge in some formal language or data entry system, but by a human informing it using a natural language (generally supposed to be English), or by the system reading books or articles that had been prepared for human readers. This document describes the history of SNePS and some recent SNePS projects. It has been superseded by: Shapiro, Stuart C., & Rapaport, William J. (1992), ``The SNePS Family,'' _Computers and Mathematics with Applications_, Vol. 23: 243-275. Reprinted in Fritz Lehmann (ed.), _Semantic Networks in Artificial Intelligence_ (Oxford: Pergamon Press, 1992): 243-275. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-02.ps.Z %A Regan, Kenneth W. %A Sivakumar, D. %A Cai, Jin-Yi %T Pseudorandom Generators, Measure Theory, and Natural Proofs %R 95-02 %D January 25, 1995 %I Department of Computer Science, SUNY Buffalo %X This paper proves that if strong pseudorandom number generators or one-way functions exist, then the class of languages that have polynomial-sized circuits is not small within exponential time, in terms of the resource-bounded measure theory of Lutz. More precisely, if for some e > 0 there exist nonuniformly 2^{n^e}-hard PSRGs, as is widely believed, then the complexity class P/poly does not have measure zero in EXP. Our results establish connections between the measure theory and the "natural proofs" of Razborov and Rudich. Our work is also motivated by Lutz's hypothesis that NP does not have measure zero in EXP; obtaining our results with NP in place of P/poly would show much more far-reaching consequences from the existence of PSRGs than are currently known. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-08.ps.Z %A Regan, Kenneth W. %A Sivakumar, D. 
%T Improved Resource-Bounded Borel-Cantelli and Stochasticity Theorems %R 95-08 %D February 16, 1995 %I Department of Computer Science, SUNY Buffalo %K Resource-bounded measure, Borel-Cantelli Lemma, martingales, stochasticity %X This note strengthens and simplifies Lutz's resource-bounded version of the Borel-Cantelli lemma for density systems and martingales. We observe that the technique can be used to construct martingales that are ``additively honest,'' and also martingales that are ``multiplicatively honest.'' We use this to improve the ``Weak Stochasticity Theorem'' of Lutz and Mayordomo: their result does not address the issue of how rapidly the bias away from 1/2 converges toward zero in a ``stochastic'' language, while we show that the bias must vanish exponentially. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-07.ps.Z %A Dabholkar, Vinay %A Chakravarty, Sreejit %A Najm, Farid %A Patel, Janak %T Cyclic Stress Tests for Full Scan Circuits %R 95-07 %D February 24, 1995 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-10.ps.Z %A Zaionc, Marek %T Lambda Representation of Operations Between Different Term Algebras %R 95-10 %D February 28, 1995 %I Department of Computer Science, SUNY Buffalo %K lambda definability %Y F.4.1 Mathematical Logic (see also F.1.1,I.2.2-3)-Lambda calculus and related systems F.4.1 Mathematical Logic (see also F.1.1,I.2.2-3)-Recursive function theory %X There is a natural isomorphism identifying second order types of the simple typed $\lambda$ calculus with free homogeneous term algebras. Let $\tau^A$ and $\tau^B$ be types representing algebras $A$ and $B$ respectively. Any closed term of the type $\tau^A \to \tau^B$ represents a computable function between algebras $A$ and $B$. The problem investigated in the paper is to find and characterize the set of all $\lambda$ definable functions between structures $A$ and $B$. 
The problem is presented in a more general setting. If algebras $A_1,...,A_n,B$ are represented respectively by second order types $\tau^{A_1},...,\tau^{A_n},\tau^B$, then $\tau^{A_1} \to (...( \tau^{A_n} \to \tau^B )...)$ is a type of functions from the product $A_1 \times ... \times A_n$ into algebra $B$. Any closed term of this type is a representation of an algorithm which transforms a tuple of terms of types $\tau^{A_1},...,\tau^{A_n}$ respectively into a term of type $\tau^B$, which represents an object in algebra $B$ (see [B\"{o}B85]). The problem investigated in the paper is to find an effective computational characterization of the $\lambda$ definable functions between arbitrary free algebras and the expressiveness of such transformations. As an example we consider $\lambda$ definability between well-known free structures such as numbers, words, and trees. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-11.dvi.Z %A Chakravarty, Sreejit %A Fuchs, Kent %A Patel, Janak %T Evaluation and Generation of IDDQ Diagnostic Tests for Bridging Faults in Combinational Circuits %R 95-11 %D February 27, 1995 %I Department of Computer Science, SUNY Buffalo %X Diagnosis is the process of identifying the fault in a faulty chip. We assume that IDDQ measurement is used during diagnosis. A novel algorithm for evaluating the effectiveness of a set of input vectors to diagnose bridging faults in combinational circuits is presented. We also present a simulation-based test generator for computing IDDQ tests for diagnosing bridging faults. Experimental evaluation of the algorithms is presented. The results obtained are very encouraging. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-15.ps.Z %A Ehrlich, Karen %A Rapaport, William J. 
%T A Computational Theory of Vocabulary Expansion: Project Proposal %R 95-15 %D March 21, 1995 %I Department of Computer Science, SUNY Buffalo %K artificial intelligence, computational linguistics, natural-language understanding, lexical acquisition, semantics, semantic networks, machine learning %Y I.2 ARTIFICIAL INTELLIGENCE I.2.0 General-Cognitive simulation I.2.1 Applications and Expert Systems (see also H.4,J)-Natural language interfaces I.2.3 Deduction and Theorem Proving-Nonmonotonic reasoning and belief revision I.2.4 Knowledge Representation Formalisms and Methods-Semantic networks I.2.6 Learning (see also K.3.2)-Concept learning I.2.7 Natural Language Processing-Language parsing and understanding %X This project concerns the development and implementation of a computational theory of how human readers and other natural-language-understanding systems can automatically expand their vocabulary by determining the meaning of a word from context. The word might be unknown to the reader, familiar but misunderstood, or familiar but being used in a new sense. `Context' includes the prior and immediately surrounding text, grammatical information, and the reader's background knowledge, but no access to a dictionary or other external source of information (including a human). The fundamental thesis is that the meaning of such a word (1) *can* be determined from context, (2) can be *revised* and refined upon further encounters with the word, (3) ``*converges*'' to a dictionary-like definition if enough context has been provided and there have been enough exposures to the word, and (4) eventually ``*settles down*'' to a ``steady state'', which, however, is always subject to revision upon further encounters with the word. The system is being implemented in the SNePS-2.1 knowledge-representation and reasoning system, which provides a software laboratory for testing and experimenting with the theory. 
This research is a component of an interdisciplinary, cognitive-science project to develop a computational cognitive model of a reader of narrative text. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-14.ps.Z %A Jayaraman, B. %A Moon, K. %T Implementation of Subset Logic Programs %R 95-14 %D March 24, 1995 %I Department of Computer Science, SUNY Buffalo %K subset and relational clauses, sets, set matching and unification, memo tables, monotonic aggregation, lazy evaluation, abstract machine, instruction set, performance analysis %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-16.ps.Z %A Cai, Jin-Yi %A Selman, Alan L. %T Average Time Complexity Classes %R 95-16 %D March 31, 1995 %I Department of Computer Science, SUNY Buffalo %K computational complexity, average time complexity classes, hierarchy, Average-P, logarithmico-exponential %Y F.1.3 Complexity Classes (see also F.2)-Complexity hierarchies %X We extend Levin's theory of average polynomial time to arbitrary time bounds and prove that average time complexity classes form as fine a hierarchy as do deterministic time complexity classes. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-17.ps.Z %A Cai, Pu %A Cai, Jin-Yi %A Naik, Ashish %T Efficient algorithms for a scheduling problem and its applications to illicit drug market crackdowns %R 95-17 %D April 03, 1995 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-11.ps.Z %A Hill, Robin K. 
%T Issues of Semantics in a Semantic-Network Representation of Belief %R 94-11 %D April 03, 1995 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-18.ps.Z %A Gong, Yiming %A Chakravarty, Sreejit %T A Dynamic Diagnostic Test Generation System for IDDQ Measurement Based Diagnosis of Bridging Faults %R 95-18 %D April 10, 1995 %I Department of Computer Science, SUNY Buffalo %K VLSI, Testing, Diagnosis, Test Generation, IDDQ, Bridging Fault %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-20.ps.Z %A Zaionc, Marek %T Fixpoint Technique for Counting Terms in Typed Lambda Calculus %R 95-20 %D April 14, 1995 %I Department of Computer Science, SUNY Buffalo %K typed lambda calculus, Curry-Howard isomorphism %Y F.4.1 Mathematical Logic (see also F.1.1,I.2.2-3)-Lambda calculus and related systems %X Typed lambda calculus with a denumerable set of ground types is considered. The aim of the paper is to give a procedure for counting closed terms in long normal form. It is shown that there is a surprising correspondence between the number of closed terms and the fixpoint solution of a system of polynomial equations in some complete lattice. It is proved that counting terms in typed lambda calculus can be reduced to the problem of finding the least fixpoint of a system of polynomial equations, and an algorithm for finding this least fixpoint is presented. By the well-known Curry-Howard isomorphism, the result can be interpreted as a method for counting proofs in the implicational fragment of propositional intuitionistic logic. The problem of the number of terms was studied, but never published, by Ben-Yelles and Roger Hindley. Similarly, Hirokawa proved that the question whether a given type (formula) possesses an infinite number of normal terms (proofs) is polynomial-space complete. 
%Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-22.ps.Z %A Govindarajan, Kannan %A Jayaraman, Bharat %A Mantha, Surya %T Relaxation in Constraint Logic Languages %R 95-22 %D April 25, 1995 %I Department of Computer Science, SUNY Buffalo %K constraints, preferences, constraint hierarchies, logic programming, optimization, relaxation, model-theoretic and operational semantics, modal logic, soundness and completeness %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-21.ps.Z %A Chopra, Rajiv %A Srihari, Rohini %A Ralston, Anthony %T HyperArc Consistency and Expensive Constraints %R 95-21 %D May 05, 1995 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-24.ps.Z %A Zaionc, Marek %T Lambda definability is decidable for second order types and for regular third order types %R 95-24 %D May 08, 1995 %I Department of Computer Science, SUNY Buffalo %K lambda definability %Y F.4.1 Mathematical Logic (see also F.1.1,I.2.2-3)-Lambda calculus and related systems %X Loader has proved that the Statman-Plotkin conjecture fails: it is undecidable whether a given function in the full type hierarchy is lambda definable. Loader's proof encodes the word problem in the full type hierarchy based on a domain with 7 elements. The aim of this paper is to show that the lambda definability problem, restricted to second order types and regular third order types, is decidable in any finite domain. Obviously, lambda definability is decidable for types of order 0 and 1. As an additional effect of this result, we observe that for certain types there is no finite grammar generating all closed terms. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-16.dvi.Z %A Chakravarty, Sreejit %A Thadikaran, Paul J. 
%T Generation and Simulation of IDDQ Tests for Bridging and Leakage Faults in Combinational Circuits %R 92-16 %D June 1992 %I Department of Computer Science, SUNY Buffalo %X In the absence of layout information, and for better defect coverage, test generation and fault simulation systems must target all bridging faults. The usefulness of targeting a subset of such faults, viz. all two-line bridging faults, is based on the fact that an IDDQ test set that detects all two-line bridging faults also detects all multiple-line, single-cluster bridging faults. A novel algorithm for simulating IDDQ tests for all two-line bridging faults in combinational circuits is presented. The algorithm uses an implicit representation of the fault list, thereby resulting in a time- and space-efficient algorithm. We show that the problem of computing IDDQ tests for a two-line bridging fault, as well as for a leakage fault, in some restricted classes of circuits, is intractable. Next, experimental results on using randomly generated IDDQ test sets for detecting bridging faults are presented. These results point to the computational feasibility of targeting all two-line bridging faults in combinational circuits. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-25.ps.Z %A Koepsell, David R. %A Rapaport, William J. %T The Ontology of Cyberspace: Questions and Comments %R 95-25 %D April 22, 1995 %I Department of Computer Science, SUNY Buffalo %K ontology, cyberspace, philosophical issues, copyright, patent, legal issues %Y D.2.9 Management-Copyrights I.2.0 General-Philosophical foundations K.5 LEGAL ASPECTS OF COMPUTING K.5.1 Software Protection-Copyrights K.5.1 Software Protection-Patents K.5.1 Software Protection-Trade secrets K.5.m Miscellaneous-Hardware patents %X This document consists of two papers: "The Ontology of Cyberspace: Preliminary Questions", by David R. Koepsell, and "Comments on `The Ontology of Cyberspace'," by William J. Rapaport. 
They were originally presented at the Tri-State Philosophical Association Meeting, St. Bonaventure University, 22 April 1995. This document is SUNY Buffalo Department of Computer Science Technical Report 95-25 and SUNY Buffalo Center for Cognitive Science Technical Report 95-09. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-41.ps.Z %A Cai, Jin-Yi %A Fuchs, W.H.J. %A Kozen, Dexter %A Liu, Zicheng %T Efficient Average-Case Algorithms for the Modular Group %R 93-41 %D June 15, 1995 (modified version) %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-27.ps.Z %A Cai, Jin-Yi %A Liu, Zicheng %T The bounded membership problem of the monoid SL_2(N) %R 95-27 %D June 15, 1995 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-29.ps.Z %A Babai, L. %A Beals, R. %A Cai, J-Y. %A Ivanyos, G. %A Luks, E. %T Multiplicative equations over commutative matrices %R 95-29 %D July 13, 1995 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-30.ps.Z %A Cai, Jin-Yi %A Sivakumar, D. %T The Resolution of a Hartmanis Conjecture %R 95-30 %D July 13, 1995 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-31.ps.Z %A Cai, Jin-Yi %A Naik, Ashish V. %A Sivakumar, D. 
%T On the Existence of Hard Sparse Sets under Weak Reductions %R 95-31 %D July 13, 1995 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-23.ps.Z %A Milun, Davin %T Generating Markov Random Field Image Analysis Systems from Examples %R 95-23 %D May 1, 1995 %I Department of Computer Science, SUNY Buffalo %K MRF ; pattern recognition ; edge detection ; low level computer vision %Y I.5.2 Design Methodology I.2.6 Learning (see also K.3.2)-Parameter learning %X This thesis develops image interpretation systems from training sets of images. The system learns Markov random field parameters by sampling from this training set. Specialized MRFs can model both the image and the noise processes. The method has been tested in the domains of binary image reconstruction, edge detector enhancement, and edge detection. Capturing the noise process within the MRF parameters is achieved by sampling neighborhoods in the original images together with the corresponding neighborhoods in the noisy images, thereby creating a probability density function over pairs of neighborhoods across both images. This technique is called the Double Neighborhood MRF (DNMRF). It can capture the noise model even in situations where the noise is correlated. The task that the DNMRF method performs, and the domain in which it operates, can be changed by simply providing an appropriate set of training images. The domains in which this system will work are those where the task to be performed is localized in nature, has a binary desired result, and has an available set of training images. The required size of the training set is studied. For standard MRFs, only 16 images of size 64x64 need to be sampled. The performance of DNMRFs continues to improve even as the sample size increases beyond 256 images. The method uses optimization to interpret images. 
The optimizers used are the ICM gradient descent method and simulated annealing. The sparseness of sampled MRF distributions is ameliorated by a technique developed by Hancock and Kittler. Of note are the domains of edge detection and edge detector enhancement of JPEG compressed images. In the case of edge detection, the DNMRF method, when operating on binary images, performed as well as, or even better than, other well-known edge detectors, such as Sobel and Canny. In the case of edge enhancement of JPEG compressed images, the method greatly improved the output of the edge detector. %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-35.ps.Z %A Niyogi, Debashish %T A Knowledge-Based Approach to Deriving Logical Structure from Document Images %R 94-35 %D August 1994 %I Department of Computer Science, SUNY Buffalo %K Document Understanding ; Image Analysis ; Artificial Intelligence ; Pattern Recognition ; Layout Analysis ; Digital Libraries ; Logic Programming ; Knowledge-Based Systems %X An important application of artificial intelligence is document image understanding, specifically the analysis of document images to derive a symbolic description of the document structure and contents. This requires the segmentation of the different blocks of printed matter using standard image processing techniques, and the use of spatial domain knowledge to first classify these blocks (e.g., text paragraphs, photographs, etc.), then group these blocks into logical units (e.g., newspaper stories, magazine articles, etc.), and finally determine the reading order of the text blocks within each logical unit. The above steps describe the problem of converting the physical structure of the document into its logical structure with the use of domain knowledge about document layout. The objective of this work is to develop a computational model for the derivation of the logical structure of documents using certain formalisms designed for this task. 
In this model, a simple yet powerful rule-based control strategy utilizes the data obtained from the invocation of different types of operations on a digitized document image, and makes inferences about the document using a knowledge base of document layout rules. The domain knowledge about document structure is represented in a unified multi-level hierarchical form, and is used by reasoning processes to make inferences. The main issues investigated in this research are: the kinds of domain and control knowledge that are required, how to represent this knowledge in a globally integrated form, and how to devise a control strategy that efficiently utilizes this knowledge. A knowledge-based document logical structure derivation system (DeLoS) has been developed based on this model. The system consists of a hierarchical rule-based control system that guides the block classification, block grouping and read-ordering operations; a global data structure that stores the document image data and incremental inferences; and a domain knowledge base that encodes the rules governing document layout. Applications of this approach include use in digital libraries for the retrieval of relevant logical document information, as well as in comprehensive document understanding systems that can read document text and allow interactive querying of the syntactic and semantic information in a document. 
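The rule-based control idea in the DeLoS abstract above — apply layout rules from a knowledge base to classify blocks — can be illustrated in miniature. This is a hedged sketch only: the block features, thresholds, and labels below are invented for illustration, whereas DeLoS itself uses a hierarchical knowledge base and a far richer rule set.

```python
# A toy rule-based block classifier in the spirit of the DeLoS control
# strategy: ordered layout rules consulted against block features.
# All features and thresholds here are hypothetical examples.

def classify_block(block):
    """Apply layout rules in order; the first matching rule wins."""
    rules = [
        (lambda b: b["text_density"] < 0.1,                  "photograph"),
        (lambda b: b["height"] < 30 and b["font_size"] > 14, "headline"),
        (lambda b: b["text_density"] >= 0.1,                 "text paragraph"),
    ]
    for condition, label in rules:
        if condition(block):
            return label
    return "unknown"

print(classify_block({"text_density": 0.4, "height": 200, "font_size": 10}))
# text paragraph
```

A real system would follow classification with grouping and read-ordering passes over the same global data structure, as the abstract describes.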
%Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-12.ps.Z %A Zhang, Aidong %A Cheng, Biao %A Acharya, Raj %T Texture-Based Image Retrieval Using Fractals %R 95-12 %D July 21, 1995 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-35.ps.Z %A Cai, Jin-Yi %T A simple improvement of a theorem of Polya %R 95-35 %D August 10, 1995 %I Department of Computer Science, SUNY Buffalo %K quadratic residues %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-36.ps.Z %A Cai, Jin-Yi %T Frobenius's degree formula and Toda's polynomials %R 95-36 %D August 10, 1995 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-34.ps.Z %A Wu, Min-You %T On Runtime Parallel Scheduling %R 95-34 %D August 10, 1995 %I Department of Computer Science, SUNY Buffalo %X Parallel scheduling is a new approach to load balancing. In parallel scheduling, all processors cooperate to schedule work. Parallel scheduling is able to accurately balance the load by using global load information at compile-time or runtime, and it provides high-quality load balancing. This paper presents an overview of the parallel scheduling technique. Particular scheduling algorithms for tree, hypercube, and mesh networks are presented. These algorithms can fully balance the load and maximize locality at runtime. Communication costs are significantly reduced compared to other existing algorithms. 
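The "global load information" idea behind the parallel scheduling abstract above can be sketched: when every processor knows all loads, the system can rebalance exactly to the average. This is a minimal, serial illustration of the principle only; the report's algorithms run in parallel on tree, hypercube, and mesh networks, and the data layout here is invented.

```python
# Toy global load balancing: with full knowledge of all processor loads,
# compute transfers that equalize integer loads to the average.
# A simplified, serial sketch of the idea, not the report's algorithm.

def balance(loads):
    """Return transfers (src, dst, amount) that bring every load to the
    integer average, assuming the total divides evenly."""
    avg = sum(loads) // len(loads)
    donors = [[i, l - avg] for i, l in enumerate(loads) if l > avg]
    takers = [[i, avg - l] for i, l in enumerate(loads) if l < avg]
    transfers = []
    while donors and takers:
        (d, excess), (t, need) = donors[-1], takers[-1]
        moved = min(excess, need)
        transfers.append((d, t, moved))
        donors[-1][1] -= moved
        takers[-1][1] -= moved
        if donors[-1][1] == 0: donors.pop()
        if takers[-1][1] == 0: takers.pop()
    return transfers

print(balance([8, 2, 5, 1]))  # moves load toward the average of 4
```

Locality, which the report's algorithms also maximize, would additionally constrain which (src, dst) pairs are allowed to exchange work.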
%Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-31.ps.Z %A Zhang, Aidong %T Impact of Multimedia Data on Workflow %R 94-31 %D August 16, 1995 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-43.ps.Z %A Zhang, Aidong %A Nodine, Marian %A Bhargava, Bharat %T Ensuring Semi-Atomicity in Heterogeneous Distributed Database Systems %R 94-43 %D August 16, 1995 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-05.ps.Z %A Zhang, Aidong %A Cheng, Biao %A Acharya, Raj %T Using Fractal Coding to Index Image Content for a Digital Library %R 95-05 %D August 16, 1995 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-19.ps.Z %A Zhang, Aidong %A Cheng, Biao %A Acharya, Raj %T Texture-Based Image Retrieval Using Fractal Codes %R 95-19 %D August 16, 1995 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-33.ps.Z %A Zhang, Aidong %T On Synchronized Presentation Management in Multimedia Database Systems %R 95-33 %D August 16, 1995 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-39.ps.Z %A Regan, K.W. %A Vollmer, H. %T Gap Languages and Log-Time Complexity Classes %R 95-39 %D September 12, 1995 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-40.ps.Z %A Cai, Jin-Yi %A Sivakumar, D. 
%T Resolution of Hartmanis' Conjecture for NL-hard sparse sets %R 95-40 %D September 18, 1995 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-41.ps.Z %A Cai, Jin-Yi %A Ogihara, Mitsunori %T Sparse Sets versus Complexity Classes %R 95-41 %D September 18, 1995 %I Department of Computer Science, SUNY Buffalo %Z Tue, 19 Sep 1995 15:03:53 GMT %U http://www.cs.buffalo.edu/pub/tech-reports/95-43.ps.Z %A Ravikumar, S. %A Sivakumar, D. %T On Self-Testing without the Generator Bottleneck %R 95-43 %D September 20, 1995 %I Department of Computer Science, SUNY Buffalo %X Suppose P is a program designed to compute a linear function f on a group G. The task of "self-testing" f, that is, testing if P computes f correctly on most inputs, usually involves checking if P computes f correctly on all the generators of G. Recently, F. Ergun presented self-testers that avoid this "generator bottleneck" for specific functions. In this paper, we generalize Ergun's results and extend them to a much larger class of functions. Our results give efficient self-testers for polynomial differentiation, integration, arithmetic in large finite field extensions, and constant-degree polynomials over large rings. %Z Thu, 21 Sep 1995 17:10:00 GMT %U http://www.cs.buffalo.edu/pub/tech-reports/95-42.ps.Z %A Cai, Jin-Yi %A Naik, Ashish V. %A Sivakumar, D. %T Bounded Truth Table Reductions of P %R 95-42 %D Sept. 21, 1994 %I Department of Computer Science, SUNY Buffalo %Z Fri, 22 Sep 1995 03:00:12 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-38.ps.Z %A Seni, Giovanni %T Large Vocabulary Recognition of On-line Handwritten Cursive Words %R 95-38 %D August 1, 1995 %I Department of Computer Science, SUNY Buffalo %K Handwriting recognition, Pen computing, Neural network applications %X A critical feature of any computer system is its interface with the user. 
This has led to the development of user interface technologies such as the mouse, the touchscreen, and pen-based input devices. Since handwriting is one of the most familiar communication media, pen-based interfaces combined with automatic handwriting recognition offer a very easy and natural input method. Pen-based interfaces are also essential in mobile computing because they are scalable. Recent advances in pen-based hardware and wireless communication have been influential factors in the renewed interest in on-line recognition systems. \\ On-line handwriting recognition is fundamentally a pattern classification task; the objective is to take an input pattern, the handwritten signal collected on-line via a digitizing device, and classify it as one of a pre-specified set of words (i.e., the system's lexicon). Because exact recognition is very difficult, a lexicon is used to constrain the recognition output to a known vocabulary. Lexicon size affects recognition performance because the larger the lexicon, the larger the number of words that can be confused. Most of the research efforts in this area have been devoted to the recognition of isolated characters or run-on hand-printed words. A smaller number of recognition systems have been devised for cursive words, a difficult task due to the presence of the letter segmentation problem (partitioning the word into letters) and large variation at the letter level. Most existing systems restrict the working dictionary sizes to less than a few thousand words. \\ This research focused on the problem of cursive word recognition. 
In particular, I investigated the issues of how to efficiently deal with large lexicon sizes, the role of dynamic information over traditional feature-analysis models in the recognition process, the incorporation of letter context and avoidance of error-prone segmentation of the script by means of an integrated segmentation and recognition approach, and the use of domain information in the postprocessing stage. These ideas were used to good effect in a recognition system that I developed; this system, operating on a 21,000-word lexicon, was able to correctly recognize 88.1% (top-10) and 98.6% (top-10) of the writer-independent and writer-dependent test set words, respectively. %Z Mon, 16 Oct 1995 16:25:34 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-45.ps.Z %A Slavik, Petr %T Improved Performance of the Greedy Algorithm for the Minimum Set Cover and Minimum Partial Cover Problems %R 95-45 %D October 15, 1995 %I Department of Computer Science, SUNY Buffalo %K Approximation Algorithms, Combinatorial Optimization, Greedy Algorithm. %X We establish significantly improved bounds on the performance of the greedy algorithm for approximating MINIMUM SET COVER and MINIMUM PARTIAL COVER. Our improvements result from a new approach to both problems. In particular, (a) we improve the known bound on the performance ratio of the greedy algorithm for general covers without cost, showing that it differs from the classical harmonic bound by a function which approaches infinity (as the size of the base set increases); (b) we show that, for covers without cost, the performance guarantee for the greedy algorithm is significantly better than the performance guarantee for the randomized rounding algorithm; (c) we prove that the classical bounds on the performance of the greedy algorithm for complete covers with costs are valid for partial covers as well, thus lowering, by more than a factor of two, the previously known estimate. 
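The greedy heuristic analyzed in TR 95-45 above is the classical one: repeatedly choose the set covering the most still-uncovered elements. A short sketch of that standard algorithm (the instance data below is an invented example):

```python
# Classical greedy set cover: pick the set covering the most uncovered
# elements until the universe is covered. This is the textbook heuristic
# whose performance ratio the report's analysis sharpens.

def greedy_set_cover(universe, sets):
    """Return indices of chosen sets covering `universe`
    (assumes the union of `sets` covers it)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 4, 6}]
print(greedy_set_cover({1, 2, 3, 4, 5, 6}, sets))  # [0, 2]
```

The report's bounds concern how far the size of the cover returned by this loop can exceed the optimum as a function of the base-set size.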
%Z Wed, 18 Oct 1995 13:48:31 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-46.ps.Z %A Wu, Min-You %T Parallel Incremental Scheduling %R 95-46 %D October 24, 1995 %I Department of Computer Science, SUNY Buffalo %K Task-parallel model, load balancing, incremental scheduling, parallel scheduling %X Parallel incremental scheduling is a new approach to load balancing. In parallel scheduling, all processors cooperate to balance the workload. Parallel scheduling accurately balances the load by using global load information. In incremental scheduling, the system scheduling activity alternates with the underlying computation work. This paper provides an overview of parallel incremental scheduling. Different categories of parallel incremental scheduling are discussed. This paper is to appear in "Parallel Processing Letters." %Z Wed, 25 Oct 1995 18:33:25 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-47.ps.Z %A Wu, Min-You %T Symmetrical Hopping: a Scalable Scheduling Algorithm for Irregular Problems %R 95-47 %D October 25, 1995 %I Department of Computer Science, SUNY Buffalo %K Scheduling, distributed memory machines, irregular problems, scalability %X Runtime support is necessary for parallel computations with irregular and dynamic structures. One important component in the support system is the runtime scheduler, which balances the workload in the system. We present a new algorithm, Symmetrical Hopping, for dynamic scheduling of ultra-lightweight processes. It is a dynamic, distributed, adaptive, and scalable scheduling algorithm. This algorithm is described and compared to four other algorithms that have been proposed in this context, namely random allocation, sender-initiated scheduling, receiver-initiated scheduling, and the gradient model. The performance of these algorithms on the Intel Touchstone Delta is presented. 
The experimental results show that the Symmetrical Hopping algorithm achieves much better performance due to its adaptiveness. This paper appears in "Concurrency: Practice and Experience," vol. 7, no. 7, pp. 689-706, Oct. 1995. %Z Wed, 25 Oct 1995 18:34:09 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-48.ps.Z %A Chakravarty, S. %T A Sampling Technique for Diagnostic Fault Simulation %R 95-48 %D October 26, 1995 %I Department of Computer Science, SUNY Buffalo %K Diagnostic Fault Simulation, Sampling, Approximation Algorithms %X The quality of diagnostic test sets (DTS) is determined using diagnostic fault simulation (DFS). Approximation algorithms, based on "sampling", for this computationally difficult problem are explored. "Fault sampling", used very effectively for fault simulation, cannot be used for DFS. As an alternative, we propose using "EC/IC sampling" for DFS, which samples the set of equivalence classes (EC) / indistinguishable classes (IC). An approach to sample ECs/ICs implicitly, without explicitly enumerating the set of ECs/ICs, is presented. Experimental evaluation of the proposed technique shows it to be very effective. %Z Thu, 26 Oct 1995 19:11:53 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-50.ps.Z %A Govindarajan, Kannan %A Jayaraman, Bharat %A Mantha, Surya %T Preference Datalog %R 95-50 %D November 01, 1995 %I Department of Computer Science, SUNY Buffalo %K preference, datalog, optimization, relaxation, bottom-up evaluation, semantics, magic rewriting %Z Wed, 08 Nov 1995 20:11:45 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-49B.ps.Z %A Rapaport, William J. %A Shapiro, Stuart C. %A Wiebe, Janyce M. %T Quasi-Indexicals and Knowledge Reports %R 95-49B %D October 26, 1995 %I Department of Computer Science, SUNY Buffalo %K artificial intelligence, knowledge representation, computational linguistics, quasi-indexicals, belief representation %X We present a computational analysis of de re, de dicto, and de se belief and knowledge reports. 
Our analysis solves a problem first observed by Hector-Neri Castañeda, namely, that the simple rule `(A knows that P) implies P' apparently does not hold if P contains a quasi-indexical. We present a single rule, in the context of a knowledge-representation and reasoning system, that holds for all P, including those containing quasi-indexicals. In so doing, we explore the difference between reasoning in a public communication language and in a knowledge-representation language, we demonstrate the importance of representing proper names explicitly, and we provide support for the necessity of considering sentences in the context of extended discourse (for example, written narrative) in order to fully capture certain features of their semantics. %Z Wed, 08 Nov 1995 20:32:01 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-51.ps.Z %A Fenner, Stephen %A Green, Frederic %A Homer, Stephen %A Selman, Alan L. %A Thierauf, Thomas %A Vollmer, Heribert %T Complements of Multivalued Functions %R 95-51 %D November 06, 1995 %I Department of Computer Science, SUNY Buffalo %K coNPMV, multivalued functions, query hierarchy, difference hierarchy %X We study the class coNPMV of complements of NPMV functions. Though defined symmetrically to NPMV, this class exhibits very different properties. We clarify the complexity of coNPMV by showing that, surprisingly, it is essentially the same as that of NPMV^{NP}. Complete functions for coNPMV are exhibited and central complexity-theoretic properties of this class are studied. We show that computing maximum satisfying assignments can be done in coNPMV, which leads us to a comparison of NPMV and coNPMV with Krentel's classes MaxP and MinP. The difference hierarchy for NPMV is related to the query hierarchy for coNPMV. Finally, we examine a functional analogue of Chang and Kadin's relationship between a collapse of the Boolean hierarchy over NP and a collapse of the polynomial time hierarchy. 
%Z Wed, 08 Nov 1995 20:33:40 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-49A.ps.Z %A Wu, Min-You %T On Parallelization of Static Scheduling Algorithms %R 95-49A %D October 31, 1995 %I Department of Computer Science, SUNY Buffalo %K Static scheduling, parallelization, scheduling quality, complexity, speedup %X Most static scheduling algorithms that schedule parallel programs represented by directed acyclic graphs (DAGs) are sequential. This paper discusses the essential issues of parallelizing static scheduling and presents two efficient parallel scheduling algorithms. The proposed algorithms have been implemented on an Intel Paragon machine, and their performance has been evaluated. These algorithms produce high-quality schedules and are much faster than existing sequential and parallel algorithms. %Z Wed, 08 Nov 1995 20:36:17 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-37.ps.Z %A Govindarajan, Kannan %A Jayaraman, Bharat %A Mantha, Surya %T Optimization and Relaxation in Constraint Logic Languages %R 95-37 %D October 21, 1995 %I Department of Computer Science, SUNY Buffalo %K constraints, logic programming, optimization, relaxation, preference, formal semantics, modal logic %Z Wed, 08 Nov 1995 20:50:14 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-52.ps.Z %A Dabholkar, Vinay P. %A Chakravarty, Sreejit %T Stress Tests for Dynamic Burn-in of Full Scan Circuits %R 95-52 %D November 09, 1995 %I Department of Computer Science, SUNY Buffalo %X Dynamic burn-in is an important process that is used to improve the reliability of circuits. In this paper, we investigate the problem of computing cyclic sequences which can be used during burn-in of full scan circuits. We show that the problems are computationally difficult. In addition, we demonstrate experimentally that good heuristics can be developed to compute stress tests for dynamic burn-in of full scan circuits. 
%Z Thu, 09 Nov 1995 20:37:09 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-44.ps.Z %A Zhang, Aidong %A Gollapudi, Sreenivas %T Multimedia Transaction Management in Database Systems %R 95-44 %D October 30, 1995 %I Department of Computer Science, SUNY Buffalo %X Current database management system techniques are insufficient to support the management of multimedia data owing to their time-sampled nature. The extension of database systems to support multimedia applications thus requires new mechanisms to ensure the synchronized presentation of multimedia data streams. In order to flexibly and efficiently present multimedia data streams to users, media streams must be segmented into media objects with time constraints among these objects specified and maintained. In this paper, we investigate a framework and systematic strategies for supporting the continuous and synchronized retrieval and presentation of multimedia data streams in a multimedia database system. Specifically, we will develop: (1) a practical framework for specifying multimedia transactions and schedules and the appropriate synchronization constraints between different media streams; (2) multimedia transaction scheduling principles underlying the synchronized presentation of multimedia data streams when delay effects are considered; and (3) pragmatic techniques for multimedia transaction scheduling to support the synchronized presentation of multimedia data streams in object-oriented database systems. %Z Wed, 15 Nov 1995 19:48:46 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-54.ps.Z %A Slavik, Petr %T A Tight Analysis of the Greedy Algorithm for Set Cover %R 95-54 %D November 19, 1995 %I Department of Computer Science, SUNY Buffalo %K Approximation Algorithms, Combinatorial Optimization, Fractional Set Cover, Greedy Algorithm, Partial Set Cover, Set Cover. %X We establish significantly improved bounds on the performance of the greedy algorithm for approximating set cover. 
In particular, we provide the first substantial improvement of the 20-year-old classical harmonic upper bound, H(m), of Johnson, Lovasz, and Chvatal, by showing that the performance ratio of the greedy algorithm is, in fact, exactly ln m - ln ln m + Theta(1), where m is the size of the ground set. The difference between the upper and lower bounds turns out to be less than 1.1. This provides the first tight analysis of the greedy algorithm, as well as the first upper bound that lies below H(m) by a function going to infinity with m. We also show that the approximation guarantee for the greedy algorithm is better than the guarantee recently established by Srinivasan for the randomized rounding technique, thus improving the bounds on the integrality gap. Our improvements result from a new approach which might be generally useful for attacking other similar problems. %Z Tue, 21 Nov 1995 16:41:51 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-55.ps.Z %A Gollapudi, Sreenivas %A Zhang, Aidong %T Buffer Management in Multimedia Database Systems %R 95-55 %D November 28, 1995 %I Department of Computer Science, SUNY Buffalo %X The delivery of continuous and synchronous multimedia data from a database server to multiple destinations over a network presents new challenges in the area of buffer management. Many factors that were not considered in conventional buffer management must be examined. In this paper, we investigate the principles of buffer management for multimedia data presentations in object-oriented database environments. The primary goal is to minimize the response time of multimedia presentations while ensuring that all continuity and synchronization requirements are satisfied. Minimum buffering requirements to guarantee the continuity and synchrony of the presentation of multimedia data are first proposed. A prefetching technique which satisfies these requirements is then developed. 
These principles and techniques provide users with the full range of information required to develop a database environment for multimedia presentations. %Z Tue, 28 Nov 1995 21:23:37 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/96-02.ps.Z %A Kumar, Ravi S. %A Sivakumar, D. %T Efficient Self-Testing of Linear Recurrences %R 96-02 %D January 29, 1996 %I Department of Computer Science, SUNY Buffalo %K self-testing, recurrences, polynomials %X We consider the problem of designing efficient self-testers for linear recurrences, and present a complete package of self-testers for this class of functions. The results are proved by demonstrating an efficient reduction from this problem to the problem of testing linear functions over certain matrix groups. Our tools include spectral analysis of matrices over finite fields, and various counting arguments that extend known techniques. The matrix twist yields completely new self-testers for polynomials over the finite field Z/p. The efficiency of our polynomial self-testers is better than all previously known testers, and in the univariate case, we are able to match the efficiency of the Blum-Luby-Rubinfeld linearity tester. We also present self-testers for convolution identities over groups, and improved self-testers for polynomials over rational domains. %Z Mon, 19 Feb 1996 16:39:52 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/96-03.ps.Z %A Chakravarty, Sreejit %A Thadikaran, Paul J. %T Which Set of Bridging Faults Should Test Compilers Target? %R 96-03 %D February 28, 1996 %I Department of Computer Science, SUNY Buffalo %K Bridging Faults, Fault Simulation, IDDQ Testing, Test Compilation, Test Compression, Test Generation %Y B.1.3; B.2.3; B.3.4; B.5.3; B.6.2; B.6.3; B.7.2 %X Existing test compilers target stuck-at faults. Recent experimental evidence suggests targeting bridging faults (BFs) because they model 40-50% of physical defects. 
For BFs, one suggestion has been to extract a small set of BFs using "layout analysis". We argue that this is not a feasible option for test compilers, thereby making a case for "layout independent analysis" of BFs. Opting for "layout independent analysis" implies that computer-aided test tools target "all possible BFs". Targeting such a large number of faults is generally considered unreasonable. We present new experimental data and analyze published data. The analysis reveals that, for many problems, very good techniques have already been developed to target such a large number of faults. %Z Mon, 04 Mar 1996 19:10:46 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/96-01.ps.Z %A Ivanyos, Gábor %T Testing membership in unitriangular matrix groups. Preliminary draft %R 96-01 %D January 02, 1996 %I Department of Computer Science, SUNY Buffalo %X We present an algorithm that, given (by a set of generators) a subgroup G of the group of upper triangular matrices with rational entries, computes generators of the lower central series of G. As an application, we show that the constructive membership problem in G can be solved in polynomial time. This draft is intended to be part of a larger project investigating the complexity of computational problems in finitely generated linear groups. %Z Mon, 04 Mar 1996 19:30:57 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/96-05.ps.Z %A Hong, Tao %T Degraded Text Recognition using Visual and Linguistic Context %R 96-05 %D March 27, 1996 %I Department of Computer Science, SUNY Buffalo %K OCR, Document Recognition, Contextual Analysis %Y I.5 PATTERN RECOGNITION %X To improve the performance of an OCR system on degraded images of text, postprocessing techniques are critical. The objective of postprocessing is to correct errors or to resolve ambiguities in OCR results by using contextual information. Depending on the extent of context used, there are different levels of postprocessing. 
In current commercial OCR systems, word-level postprocessing methods, such as dictionary lookup, have been applied successfully. However, many OCR errors cannot be corrected by word-level postprocessing. To overcome this limitation, passage-level postprocessing, in which global contextual information is utilized, is necessary. This thesis addresses problems in degraded text recognition and discusses potential solutions through passage-level postprocessing. The objective is to develop a postprocessing methodology from a broader perspective. In this work, two classes of inter-word contextual constraints, visual constraints and linguistic constraints, are exploited extensively. Given a text page with hundreds of words, many word image instances can be found to be visually similar. Formally, six types of visual inter-word relations are defined. Relations at the image level must be consistent with the relations at the symbolic level if the word images in the text have been interpreted correctly. Based on the fact that OCR results often violate this consistency, methods of visual consistency analysis are designed to detect and correct OCR errors. Linguistic knowledge sources, such as lexicography, syntax, and semantics, can be used to detect and correct OCR errors. Here, we focus on the word candidate selection problem. In this approach, an OCR system provides several alternatives for each word, and the objective of postprocessing is to choose the correct decision among these choices. Two approaches to linguistic analysis, statistical and structural, are proposed for the problem of candidate selection. A word-collocation-based relaxation algorithm and a probabilistic lattice parsing algorithm are proposed. Some OCR errors are not easily recoverable by either visual consistency analysis or linguistic consistency analysis. Integration of image analysis and language-level analysis provides a natural way to handle such difficult words.
%Z Wed, 27 Mar 1996 14:20:00 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/96-06.ps.Z %A Gollapudi, Sreenivas %A Zhang, Aidong %T NetMedia: A Client-Server Distributed Multimedia Database Environment %R 96-06 %D April 04, 1996 %I Department of Computer Science, SUNY Buffalo %X Advances in multimedia computing technologies offer new approaches to support on-line accesses to information from a variety of sources such as video clips, audio, images, and books. A client-server distributed multimedia system would be a practical approach to support such functionalities. In this paper, we present the design and implementation of a client-server distributed multimedia database environment that can be used to support large digital libraries. System architecture and design are described. Server functionalities, including client scheduling, data buffering and admission control, are investigated. A client request can only be admitted if both the quality-of-service (QoS) requirements from the client and the upper bound on total buffer consumption at the server are maintained. %Z Tue, 09 Apr 1996 19:59:15 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/96-08.ps.Z %A Dabholkar, Vinay P. %A Chakravarty, Sreejit %T Dynamic Stress Tests for ``Narrow Metal Imperfections'' in Full Scan Circuits %R 96-08 %D April 03, 1996 %I Department of Computer Science, SUNY Buffalo %K Stress tests, Dynamic burn-in %X Dynamic stress testing is an important process that is used to improve the reliability of circuits. In this paper, we investigate the problem of computing cyclic sequences which can be used during burn-in of full scan circuits. Using a switching activity model, we show that the stress due to electromigration and hot-electron degradation can be accurately modeled. Notwithstanding the difficulty of computing optimal dynamic stress tests, we demonstrate experimentally that efficient heuristics can be developed to compute good dynamic stress tests for full scan circuits.
%Z Tue, 09 Apr 1996 20:00:59 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/96-09.ps.Z %A Min-You Wu %T Scheduling for Large-Scale Parallel Video Servers %R 96-09 %D May 17, 1996 %I Department of Computer Science, SUNY Buffalo %K Video servers, parallel systems, scheduling, load-balancing, admission control %X Parallel video servers are necessary for large-scale video-on-demand and other multimedia systems. This paper addresses the scheduling problem of parallel video servers. We discuss scheduling requirements. Optimal algorithms are presented for conflict-free scheduling, delay minimization, load balancing, and admission control. %Z Fri, 17 May 1996 18:10:35 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/96-10.ps.Z %A Rapaport, William J. %T How Minds Can Be Computational Systems %R 96-10 %D May 31, 1996 %I Department of Computer Science, SUNY Buffalo %K computationalism, cognitive science, algorithms, heuristics, semiotics %Y I.2.0;I.2.4 %X The proper treatment of computationalism, as the thesis that cognition is computable, is presented and defended. Some arguments of James H. Fetzer against computationalism are examined and found wanting, and his positive theory of minds as semiotic systems is shown to be consistent with computationalism. An objection is raised to an argument of Selmer Bringsjord against one strand of computationalism, viz., that Turing-Test-passing artifacts are persons; it is argued that, whether or not this objection holds, such artifacts will inevitably be persons. %Z Fri, 31 May 1996 15:34:45 GMT %A Hexmoor, Henry %A Meeden, Lisa %T Robolearn 96: An International Workshop on Learning for Autonomous Robots %R 96-11 %D June 10, 1996 %I Department of Computer Science, SUNY Buffalo %K Robots, Machine Learning, Agent Architecture %Y Computer Science; Robotics %X Proceedings of a workshop held in Key West, Florida, USA, May 19-20, 1996.
%Z Mon, 10 Jun 1996 20:13:25 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/96-12.ps.Z %A Boxer, Laurence %A Miller, Russ %A Rau-Chaplin, Andrew %T Some Scalable Parallel Algorithms for Geometric Problems %R 96-12 %D June 14, 1996 %I Department of Computer Science, SUNY Buffalo %K parallel algorithms, computational geometry, scalable algorithms, coarse grained multicomputer, lower envelope %Y G.1.0;I.3.5;C.1.2 %X This paper considers a variety of geometric problems on input sets of size n using a coarse grained multicomputer model consisting of p processors with $\Omega(\frac{n}{p})$ local memory each, where the processors are connected to an arbitrary interconnection network. It introduces efficient scalable parallel algorithms for a number of geometric problems including the rectangle finding problem, the iso-oriented line intersection counting problem, a variety of lower envelope problems, the minimal circle-covering problem, the maximal equally-spaced collinear points problem, and the point set pattern matching problem. All of the algorithms presented are scalable in that they are applicable and efficient over a very wide range of ratios of problem size to number of processors. In addition to the practicality imparted by scalability, these algorithms are easy to implement in that all required communications can be achieved by a small number of calls to standard global routing operations. %Z Fri, 14 Jun 1996 21:01:09 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/96-13.ps.Z %A Gollapudi, Sreenivas %T A Multithreaded Client-Server Architecture for Distributed Multimedia Systems %R 96-13 %D July 19, 1996 %I Department of Computer Science, SUNY Buffalo %K Synchronization, Distributed Systems, Buffer Management %X Advances in multimedia computing technologies offer new approaches to support on-line accesses to information from a variety of sources such as video clips, audio, images, and books. 
A client-server distributed multimedia system would be a practical approach to support such functionalities. In this thesis, we present the design and implementation of a client-server distributed multimedia database environment that can be used to support multimedia systems such as large digital libraries. We focus on the problems of playout management and buffer management at the client and buffer management and admission control at the server. Current database management system techniques are insufficient to support the management of multimedia data owing to their time-sampled nature. The extension of database systems to support multimedia applications thus requires new mechanisms to ensure the synchronized presentation of multimedia data streams. In order to flexibly and efficiently present multimedia data streams to users, media streams must be segmented into media objects with time constraints among these objects specified and maintained. We investigate a framework and systematic strategies for supporting the continuous and synchronized retrieval and presentation of multimedia data streams at the client. The delivery of continuous and synchronous multimedia data from a database server to multiple destinations over a network presents new challenges in the area of buffer management. We investigate the principles of buffer management for multimedia data presentations in object-oriented database environments. The primary goal is to minimize the response time of multimedia presentations while ensuring that all continuity and synchronization requirements are satisfied. Minimum buffering requirements to guarantee the continuity and synchrony of the presentation of multimedia data are first proposed. A prefetching technique which satisfies these requirements is then developed. Server functionalities, including client scheduling, data buffering and admission control, are investigated. 
A client request can only be admitted if both the quality-of-service (QoS) requirements from the client and the upper bound on total buffer consumption at the server are maintained. %Z Fri, 19 Jul 1996 18:30:09 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/96-14.ps.Z %A Wu, Min-You %A Shu, Wei %T DDE: A Modified Dimension Exchange Method for Load Balancing in k-ary n-cubes %R 96-14 %D March 25, 1996 %I Department of Computer Science, SUNY Buffalo %K load balancing, parallel scheduling, direct method, k-ary n-cubes, dimension exchange %X The dimension exchange method (DEM) was initially proposed as a load-balancing algorithm for the hypercube structure. It has been generalized to k-ary n-cubes. However, the k-ary n-cube algorithm takes many iterations to converge to a balanced state. In this paper, we propose a direct method to modify DEM. The new algorithm, the Direct Dimension Exchange (DDE) method, takes the load average in every dimension to eliminate unnecessary load exchange. It balances the load directly, without iteratively exchanging load, and is thus able to balance the load more accurately and much faster. %Z Sun, 21 Jul 1996 18:50:03 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/96-18.ps.Z %A Chalupsky, Hans %T SIMBA: Belief Ascription by Way of Simulative Reasoning %R 96-18 %D January 31, 1996 %I Department of Computer Science, SUNY Buffalo %K belief reasoning, agent incompleteness, cognitive modeling, nonmonotonic reasoning %Y Nonmonotonic reasoning and belief revision; cognitive simulation %X A key cognitive faculty that enables humans to communicate with each other is their ability to incrementally construct and use models describing the mental states of others, in particular, models of their beliefs. Not only do humans have beliefs about the beliefs of others, they can also reason with these beliefs even if they do not hold them themselves.
If we want to build an artificial or computational cognitive agent that is similarly capable, we need a formalism that is fully adequate to represent the beliefs of other agents and that also specifies how to reason with them. Standard formalizations of knowledge or belief, in particular the various epistemic and doxastic logics, seem not very well suited to serve as the formal device upon which to build an actual computational agent. They neglect representation problems, the reasoning aspect, or the defeasibility inherent in reasoning about somebody else's beliefs, or they use idealizations that are problematic when confronted with realistic agents. Our main result is the development of SIMBA, an actually implemented belief reasoning engine that uses simulative reasoning to reason with and about the beliefs of other agents. SIMBA is built upon SL, a fully intensional, subjective logic of belief which is representationally and inferentially adequate to serve as one of the main building blocks of an artificial cognitive agent. SL can handle agents that do not believe the consequential closure of their base beliefs, or even agents that have inconsistent beliefs; put differently, it can handle the beliefs of realistic agents. It is also adequate to model introspection, it facilitates belief revision, and it has a more intuitive semantics than standard formalizations of belief. %Z Fri, 18 Oct 1996 13:41:22 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/96-19.ps.Z %A Rapaport, William J. %T Cognitive Science %R 96-19 %D October 29, 1996 %I Department of Computer Science, SUNY Buffalo %K cognitive science, artificial intelligence %Y I.2.0 %X This is a draft of the long version of an article to appear in Anthony Ralston, Edwin D. Reilly, and David Hemmendinger (eds.), _Encyclopedia of Computer Science, 4th edition_ (New York: Van Nostrand Reinhold, 1998). It is a revised and updated version of Rapaport, William J.
(1993), ``Cognitive Science'', in Anthony Ralston & Edwin D. Reilly (eds.), _Encyclopedia of Computer Science, 3rd edition_ (New York: Van Nostrand Reinhold): 185-189, which is an abridged version of Rapaport, William J. (1990), ``Cognitive Science'', _Technical Report 90-12_ (Buffalo: SUNY Buffalo Department of Computer Science). Comments on this draft are eagerly solicited. This document is _Technical Report 96-19_ (Buffalo: SUNY Buffalo Department of Computer Science) and _Technical Report 96-4_ (Buffalo: SUNY Buffalo Center for Cognitive Science). %Z Tue, 29 Oct 1996 19:15:50 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/96-21.ps.Z %A Belanger, Jay %A Pavan, A. %A Wang, Jie %T Reductions Do Not Preserve Fast Convergence Rates in Average Time %R 96-21 %D November 07, 1996 %I Department of Computer Science, SUNY Buffalo %X Cai and Selman proposed a general definition of average computation time that, when applied to polynomials, results in a modification of Levin's notion of average-polynomial-time. The effect of the modification is to control the rate of convergence of the expressions that define average computation time. With this modification, they proved a hierarchy theorem for average-time complexity that is as tight as the Hartmanis-Stearns hierarchy theorem for worst-case deterministic time. They also proved that under a fairly reasonable condition on distributions, called condition W, a distributional problem is solvable in average-polynomial-time under the modification exactly when it is solvable in average-polynomial-time under Levin's definition. Various notions of reductions, as defined by Levin and others, play a central role in the study of average-case complexity. However, the class of distributional problems that are solvable in average-polynomial-time under the modification is not closed under the standard reductions. 
In particular, we prove that there is a distributional problem that is not solvable in average-polynomial-time under the modification but is reducible, by the identity function, to a distributional problem that is, and whose distribution even satisfies condition W. %Z Thu, 07 Nov 1996 20:15:52 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/96-22.ps.Z %A Cai, Jin-Yi %A Samuthiram, Karthikeyan %T A note on the Pumping Lemma for regular languages %R 96-22 %D December 04, 1996 %I Department of Computer Science, SUNY Buffalo %K pumping lemma, regular language, Myhill-Nerode theorem %X We discuss the non-sufficiency of the pumping lemma for regularity of languages. %Z Thu, 12 Dec 1996 19:32:50 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/96-23.ps.Z %A Wu, Min-You %T Scheduling for Interactive Operations in Parallel Video Servers %R 96-23 %D December 12, 1996 %I Department of Computer Science, SUNY Buffalo %K parallel video servers, real-time scheduling, interactive operations, fast forward, Quality-of-Service %X Providing efficient support for interactive operations such as fast-forward and fast-backward is essential in video-on-demand and other multimedia server systems. In this paper, we present two basic approaches for scheduling interactive operations, the prefetching approach and the grouping approach. Scheduling algorithms are presented for both fine-grain and coarse-grain data blocks. These algorithms can precisely schedule video streams for both normal playout and interactive operations. %Z Thu, 12 Dec 1996 19:35:58 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/96-20.ps.Z %A Chakravarty, Sreejit %T Defect Detection Capability of Delay Tests for Path Delay Faults %R 96-20 %D November 07, 1996 %I Department of Computer Science, SUNY Buffalo %Y B.1.3 %X Defects cause faulty static logic behaviour, faulty dynamic logic behaviour and elevated IDDQ level.
Recent empirical and simulation studies show that adding at-speed tests to test suites detects defective ICs missed by slow-speed and IDDQ tests. Delay tests for path delay faults and transition tests are two classes of tests that are often used for isolating ICs with faulty dynamic logic behaviour. We show that both these classes of tests often fail to detect many defects that cause faulty dynamic logic behaviour. This implies that computing at-speed tests is fundamentally different from computing delay tests for parametric testing, and new techniques need to be developed. %Z Thu, 12 Dec 1996 19:39:20 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/96-24.ps.Z %A Chang, W. %A Murthy, D. %A Mei, Y. %A Zhang, A. %T Metadatabase and Search Agent for Multimedia Database Access over Internet %R 96-24 %D December 16, 1996 %I Department of Computer Science, SUNY Buffalo %K Metadata, WWW, resource discovery %X Various multimedia repositories are now distributed throughout the Internet and can be accessed via World Wide Web tools.
In such an environment, a challenging problem is to find the multimedia databases at distributed locations that are the most relevant to the user query. In this paper, we investigate approaches to the creation of a metadatabase and the design of a search agent in a metaserver which supports integrated access to various multimedia databases. The creation of the metadatabase formulates the metadata on the types of media data each multimedia database at a remote site houses and its query capabilities. The search agent at the metaserver accesses the metadatabase and controls the distribution of user queries to relevant database sites through the site server. The search agent also directs interactive dialogue between the client and multimedia databases. The proposed metadatabase and search agent are designed and implemented in a web-based multimedia information retrieval environment. %Z Tue, 31 Dec 1996 22:18:27 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/96-25.ps.Z %A Johnson, T. %A Zhang, A. %T A Framework for Supporting Quality-Based Presentation of Continuous Multimedia Streams %R 96-25 %D December 16, 1996 %I Department of Computer Science, SUNY Buffalo %K Multimedia, presentation %X In this paper, we investigate a framework for supporting the continuous and synchronized presentation of multimedia streams in a distributed multimedia system. Assuming that the network transmission and the server provide sufficient support for delivering media objects, important issues regarding the enforcement of smooth presentations of multimedia streams must be addressed at client sites. We develop various presentation scheduling algorithms that are adaptable to quality-of-service parameters. The significance of the proposed algorithms is that they adjust a presentation that has drifted out of synchrony by gradually catching up or slowing down, rather than abruptly skipping or pausing.
Experimental analysis conducted for the proposed approaches demonstrates that our algorithms can avoid hiccups in presentations. This framework can be readily used to facilitate various applications, including education and training. %Z Tue, 31 Dec 1996 22:18:31 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/97-01.ps.Z %A Pavan, A. %A Selman, Alan L. %T Complete Distributional Problems, Hard Languages, and Resource-Bounded Measure %R 97-01 %D February 06, 1997 %I Department of Computer Science, SUNY Buffalo %K average time complexity, resource-bounded measure, complete distributional problems, bi-immune %X Cai and Selman \cite{CaiSelman96} defined a modification and extension of Levin's notion of average polynomial time to arbitrary time-bounds and proved that if $L$ is P-bi-immune, then for every polynomial-time computable distribution $\mu$, the distributional problem $(L, \mu)$ is not polynomial on the $\mu$-average. We prove the following consequences of the hypothesis that the $p$-measure of NP is not 0: 1. There is a language $L$ that is not P-bi-immune and for every polynomial-time computable distribution $\mu$, the distributional problem $(L, \mu)$ is not polynomial on the $\mu$-average. 2. For every DistNP-complete distributional problem $(L, \mu)$, there exists a constant $s \geq 0$ such that $\mu( \{ x ~|~ |x| \geq n \} ) = \Omega(\frac{1}{n^s})$. That is, every DistNP-complete problem has a reasonable distribution. %Z Thu, 06 Feb 1997 16:28:15 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/97-02.ps.Z %A Slavik, Petr %T The Errand Scheduling Problem %R 97-02 %D March 14, 1997 %I Department of Computer Science, SUNY Buffalo %K Approximation Algorithms, Combinatorial Optimization, Errand Scheduling, Generalized Steiner Tree, Generalized Traveling Salesman, LP Relaxation, Set Cover, Traveling Salesman, Tree Stripping %Y G.2.2; F.2.2; %X We consider the following natural errand scheduling problem (ESP). 
We must perform a set of errands at the nodes of an edge-weighted graph, and each errand can only be performed at a subset of the nodes. What is the shortest tour that allows us to complete all the errands? We also consider the closely related tree cover problem (TCP), in which we seek a tree of minimum length that ``covers'' all errands. Both problems generalize a number of well-known problems in combinatorial optimization and have a wide range of applications, including job scheduling on multi-state CNC machines and VLSI design. We focus on the case in which the graph is a weighted tree; this trivially generalizes the famous set cover problem. Under the assumption that no errand can be performed in ``too many'' nodes, we obtain an algorithm that asymptotically matches the best possible approximation ratio for set cover and approximates both errand scheduling and tree cover within O(log m), where m is the total number of errands. Our algorithm is based on ``tree stripping''--a technique specifically designed to round the solution to the LP relaxation of tree cover, but hopefully useful for approximating other related problems. In the second part of the paper, we discuss the errand scheduling and tree cover problems on a general weighted graph with the restriction that each errand can be performed at at most r nodes. We show that in this case, ESP can be approximated to within 3r/2 and TCP to within r. %Z Wed, 26 Mar 1997 21:09:39 GMT %A Hexmoor, Henry %T ROBOLEARN 97: An International Workshop on Evaluating Robot Learning %R 97-03 %D April 12, 1997 %I Department of Computer Science, SUNY Buffalo %K Machine learning, Robotics, Autonomous agents %X This volume is the proceedings of the meeting held on May 13, 1997, in Daytona, Florida, USA, in conjunction with the FLAIRS symposium. There is no online version. You must order this from the Dept of CS at the University at Buffalo.
%Z Tue, 15 Apr 1997 14:51:03 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/97-04.ps.Z %A Sheikholeslami, Gholamhosein %A Zhang, Aidong %T A Clustering Approach for Large Visual Databases %R 97-04 %D February 21, 1997 %I Department of Computer Science, SUNY Buffalo %K Image clustering %X The representation and organization of images in a large-scale image database is crucial to support effective and efficient accesses to image data on the basis of content. In this process, significant features must first be extracted from image data in their pixel format. These features must then be classified and indexed to assist efficient retrieval of image content. In this paper, we investigate effective image data representation and organization approaches for large-scale image databases. An effective block-oriented image decomposition structure is used as a fundamental data model for representing image content. A clustering mechanism is then proposed to categorize images based on feature similarity. Efficient image retrieval is supported. Experimental analyses are conducted and presented to demonstrate the effectiveness and efficiency of the approaches. %Z Tue, 15 Apr 1997 14:51:51 GMT %A Walters, Deborah %A Li, Yiming %A Milun, Elyse %A Atanacio, Bemina %T General Ribbons: A Model for Stylus Generated Images %R 97-05 %D May 15, 1997 %I Department of Computer Science, SUNY Buffalo %K General Ribbons, Thinning, Stylus Generated Images, Segmentation %X General ribbons are a mathematical model of stylus generated images which is based on the image formation process. One purpose of the model is to provide a formal basis for the development of thinning algorithms for the stroke components of stylus generated images. Before the stroke components can be thinned they must be segmented from the blob components of the image which do not require thinning. A second purpose of the model is to provide a formal basis for the development of blob/stroke segmentation algorithms.
Two blob/stroke segmentation algorithms based on the model are presented. %Z Fri, 16 May 1997 16:45:08 GMT %A Walters, Deborah %A Li, Yiming %A Wright, Ron %T General Ribbons: Generic Blob/stroke Segmentation %R 97-07 %D May 16, 1997 %I Department of Computer Science, SUNY Buffalo %K General Ribbons, Segmentation, Thinning, Stylus-generated Images %X General ribbons are a mathematical model of stylus generated images which is based on the image formation process. One purpose of the model is to provide a formal basis for the development of thinning algorithms for the stroke components of stylus generated images. Before the stroke components can be thinned they must be segmented from the blob components of the image which do not require thinning. A second purpose of the model is to provide a formal basis for the development of blob/stroke segmentation algorithms. This technical report contains the definitions, theorems and proofs on which the generic blob/stroke segmentation algorithm is based. A discussion of the General Ribbon model and other blob/stroke segmentation algorithms can be found in: Walters, Deborah; Li, Yiming; Milun, Elyse; Atanacio, Bemina. General Ribbons: A Model for Stylus Generated Images, CS-UB Technical Report 97-05, May 15, 1997. %Z Tue, 27 May 1997 18:52:27 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/97-08.cogsci.tr.ps.Z %A Ehrlich, Karen %A Rapaport, William J. %T A Computational Theory of Vocabulary Expansion %R 97-08 %D May 5, 1997 %I Department of Computer Science, SUNY Buffalo %K computational linguistics, natural language learning, semantic networks, knowledge representation %X As part of an interdisciplinary project to develop a computational cognitive model of a reader of narrative text, we are developing a computational theory of how natural-language-understanding systems can automatically expand their vocabulary by determining from context the meaning of words that are unknown, misunderstood, or used in a new sense. 
`Context' includes surrounding text, grammatical information, and background knowledge, but no external sources. Our thesis is that the meaning of such a word *can* be determined from context, can be *revised* upon further encounters with the word, ``*converges*'' to a dictionary-like definition if enough context has been provided and there have been enough exposures to the word, and eventually ``*settles down*'' to a ``steady state'' that is always subject to revision upon further encounters with the word. The system is being implemented in the SNePS knowledge-representation and reasoning system. This document is a slightly modified version (containing the algorithms) of that which is to appear in _Proceedings of the 19th Annual Conference of the Cognitive Science Society (Stanford University)_ (Mahwah, NJ: Lawrence Erlbaum Associates). It is also _Technical Report 97-2_ (Buffalo: SUNY Buffalo Center for Cognitive Science). It is also available on-line at . %Z Tue, 27 May 1997 19:10:31 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/97-09.ps.Z %A Rajiv Chopra %T An Architecture for Exploiting Qualitative Scene-specific Context in High-Level Computer Vision %R 97-09 %D June 1, 1997 %I Department of Computer Science, SUNY Buffalo %K constraint satisfaction; computer vision; interval arithmetic; context based vision; image understanding %X In this dissertation, we present an architecture to incorporate collateral information in the high-level computer vision task of object location. The declarative specification of the scene hypothesis and the domain-independent control algorithm that exploits spatial information to drive the vision process are two major contributions of this dissertation. This work has been theoretically grounded in constraint satisfaction algorithms. Apart from the application of standard CSP algorithms, several new techniques in constraint satisfaction have been developed.
We provide an algorithm that performs consistency filtering for certain types of non-binary constraints. We also present an innovative application of interval arithmetic in constraint satisfaction. Though interval constraint satisfaction is an extensive research area, we believe this to be the first attempt at using interval arithmetic to reduce the domain generation cost of finite domain CSPs. A new algorithm is proposed for this purpose; we illustrate it with a simple example and detail its analysis and implementation. We have demonstrated the architecture in three implemented vision systems and have shown that it is simple yet powerful for application in several domains. %Z Tue, 22 Jul 1997 17:16:42 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/97-10.ps.Z %A Cai, Jin-Yi %A Nerurkar, Ajay %A Wu, Min-You %T The Design of Uncheatable Benchmarks Using Complexity Theory %R 97-10 %D July 18, 1997 %I Department of Computer Science, SUNY Buffalo %K benchmarks, complexity theory %X We study uncheatable benchmarks. We present schemes for their design and implementation. We present two methods to obtain a computational advantage in judging performance results. In the first approach, the tester gains a computational advantage via a hidden ``secret,'' which is randomly generated. With the secret information, the tester can know, or can easily compute, the correct answer in advance. This approach is called the Secret Key (SK) method. Secondly, for many problems, verifying the result is easier than computing it. For example, verifying the result of a linear equation solver can be accomplished faster than any known algorithm which computes it. This asymmetry allows the tester a computational advantage over the vendor. This approach is called the Result Verification (RV) method.
%Z Tue, 22 Jul 1997 17:19:28 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/97-11.ps.Z %A Fang, Chi %T Deciphering Algorithms for Degraded Document Recognition %R 97-11 %D July 17, 1997 %I Department of Computer Science, SUNY Buffalo %K OCR, document recognition, pattern recognition, deciphering %Y pattern recognition; text processing %X The research presented in this thesis provides new solutions to two fundamental problems in document recognition. The first problem is character segmentation: touching characters and character fragmentation caused by image degradation are difficult problems for current document recognition systems. The second problem is the dependence of today's document recognition systems on extensive font training. These two problems are shown to be the main causes of performance breakdown in most of today's commercial document recognition systems. Our research provides solutions by seeking alternative approaches that can recognize degraded documents with high accuracy and robust performance; reliable performance on degraded documents is achieved by avoiding these two difficult problems altogether. We propose to view the computational task of document recognition as a process of finding the mapping between the visual patterns in the input document and their linguistic identities. Deciphering algorithms are proposed to decode this mapping by combining the visual constraints from the input document with various levels of linguistic constraints from a language base. This deciphering approach works at different levels of language granularity, at both the character level and the word level. Document recognition can therefore be achieved by decoding characters using character-level linguistic constraints, or by the direct decryption of words using word-level linguistic constraints. 
A modified character-level deciphering algorithm is proposed, based on existing research. The modifications extend the original character-level deciphering algorithm to documents that contain touching characters and provide a solution to errors in character pattern clustering. A word-level deciphering approach to document recognition is also proposed in this thesis. Word-level language constraints are used to first decrypt the words in a selected portion of the input text that has relatively more reliable word n-gram statistics. Word decryption is treated as a process of constraint satisfaction and is implemented by a probabilistic relaxation algorithm. After the decryption of words in the selected portion of the input text, font information is learned and then used to re-recognize the rest of the input text, which was not used for deciphering. This re-recognition is achieved by a hypothesis-testing algorithm. The applicability of the proposed deciphering approaches to practical document recognition tasks is demonstrated by experimental results on scanned documents. The test documents contain image degradations including touching characters, character fragmentation, and individual character deformations. %Z Tue, 22 Jul 1997 17:21:35 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/97-12.ps.Z %A Hexmoor, Henry %A Cuddihy, Elisabeth %T Performance of a simple cooperative individual situation assessment (CISA) with respect to information sharing strategy metrics %R 97-12 %D July 23, 1997 %I Department of Computer Science, SUNY Buffalo %K multi-agent systems, information sharing %Z Wed, 23 Jul 1997 14:16:29 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/97-13.ps.Z %A Naik, Ashish V. %A Rogers, John D. %A Royer, James S. %A Selman, Alan L. 
%T A Hierarchy Based on Output Multiplicity %R 97-13 %D August 05, 1997 %I Department of Computer Science, SUNY Buffalo %K multivalued functions, nondeterminism, function classes, output multiplicity %X The class NP$k$V consists of those partial, multivalued functions that can be computed by a nondeterministic, polynomial time-bounded transducer that has at most $k$ distinct values on any input. We define the {\em output-multiplicity hierarchy\/} to consist of the collection of classes NP$k$V, for all integers $k \geq 1$. In this paper we investigate the strictness of the output-multiplicity hierarchy and establish three main results pertaining to this: 1. If for any $k>1$ the class NP$k$V collapses into the class NP$(k-1)$V, then the polynomial hierarchy collapses to its second level, $\Sigma_2^P$. 2. If the converse of the above result is true, then any proof of this converse cannot relativize. We exhibit an oracle relative to which the polynomial hierarchy collapses to $P^{NP}$, but the output-multiplicity hierarchy is strict. 3. Relative to a random oracle, the output-multiplicity hierarchy is strict. This result is in contrast to the still-open problem of the strictness of the polynomial hierarchy relative to a random oracle. In introducing the technique for the third result we prove a related result of interest: relative to a random oracle, ${\rm UP} \neq {\rm NP}$. %Z Tue, 05 Aug 1997 16:21:03 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/98-01.ps.Z %A Shapiro, Stuart C. 
%T A Procedural Solution to the Unexpected Hanging and Sorites Paradoxes %R 98-01 %D January 05, 1998 %I Department of Computer Science, SUNY Buffalo %K paradoxes, unexpected hangman, prediction paradoxes, sorites, parallel procedures, procedures %X The paradox of the Unexpected Hanging, related prediction paradoxes, and the sorites paradoxes all involve reasoning about ordered collections of entities: days ordered by date in the case of the Unexpected Hanging; men ordered by the number of hairs on their heads in the case of the bald-man version of the sorites. The reasoning then assigns each entity a value that depends on the previously assigned value of one of the neighboring entities. The final result is paradoxical because it conflicts with the obviously correct, commonsensical value. The paradox is due to the serial procedure of assigning a value based on the newly assigned value of the neighbor. An alternative procedure is to assign each value based only on the original values of the neighbors---a parallel procedure. That procedure does not give paradoxical answers. %Z Mon, 09 Mar 1998 16:36:35 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/98-02.ps.Z %A Campbell, Alistair E. %A Shapiro, Stuart C. %T Algorithms for Ontological Mediation %R 98-02 %D January 23, 1998 %I Department of Computer Science, SUNY Buffalo %K ontology, agent, miscommunication, mediation, interlingua %X We lay the foundation for ontological mediation as a method for resolving communication difficulties resulting from different ontologies. The notion of hierarchical relations enables a theory of orientation or direction of ontologies to be presented. We describe an ontological mediator as being able to think about (or conceptualize) concepts from ontologies and find equivalences between them. Algorithms for finding the meanings of unfamiliar words by asking questions are introduced and evaluated experimentally. 
%Z Mon, 09 Mar 1998 16:36:42 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/97-14.ps.Z %A Hexmoor, H %A Lopez, F %T Toward Object Selection with a Pointer %R 97-14 %D October 30, 1997 %I Department of Computer Science, SUNY Buffalo %K object selection, deixis %X The use of a pointer as a means of disambiguating objects in the world is discussed. We offer a visual object selection algorithm and a technique for computing pointer effectiveness, and we report on relevant assumptions and directions for future research. %Z Mon, 09 Mar 1998 16:40:55 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/98-03.ps.Z %A Soh, Jung %T A Theory of Document Object Locator Combination %R 98-03 %D March 25, 1998 %I Department of Computer Science, SUNY Buffalo %K document analysis, object location, address block location, combination, Korean mail %X Traditional approaches to document object location use a single locator that is expected to locate as many instances of the object class of interest as possible. However, if the class includes subclasses with diverse visual characteristics, or is not characterized by easily computable visual features, it is difficult for a single locator to account for the wide variation in object characteristics within the class. As a result, increasingly complex models of the objects to be located are used. An alternative approach is to combine the decisions of multiple locators, each of which is suited to certain image conditions. This approach utilizes a collection of simple locators that complement one another, rather than relying on one complex locator. An effective method for combining the location results is vital to the success of this approach. This thesis presents a theory of combining the results of multiple document object locators tuned to different object characteristics. The purpose of the combination is to achieve integrated location performance better than that of any individual locator. 
Location results are represented by objective confidence values, regardless of their types. This allows locators to be combined at any output information level. Since different locators use different segmentation algorithms, multiple decisions on a single object are not readily available; object equivalence and correspondence are used to determine which decisions to combine. The highest-confidence, confidence-summation, weighted confidence-summation, probabilistic confidence-estimation, and fuzzy-integral methods are proposed for combining confidence values. A method for dynamic selection of locators based on training image set partitioning is proposed. Differences in locator performance levels are represented by locator relevance values, which are used in a combination method and in dynamic selection. Measures for evaluating combination performance are proposed and compared. The theory is applied to the address block location problem. A multiple-locator algorithm employing a machine-printed address block locator, a handwritten address block locator, and a clustering-based locator is created and tested on a database of 1,100 mail piece images in two separate experiments. The experimental results show that combining location results using the theory provides location performance that exceeds that of any individual locator, and is therefore a viable approach to document object location problems. %Z Mon, 13 Apr 1998 13:33:35 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/98-04.ps.Z %A Hexmoor, H %T Representing and Learning Routine Activities %R 98-04 %D December 1995 %I Department of Computer Science, SUNY Buffalo %K knowledge representation, robotics, intelligent agents %X A routine is a habitually repeated performance of some actions. Agents use routines to guide their everyday activities and to enrich their abstract concepts about acts. 
This dissertation addresses the questions of how an agent engaged in ordinary, routine activities changes its behavior over time, how the agent's internal representations of the world are affected by its interactions, and what constitutes a good agent architecture for learning routine interactions with the world. In it, I develop a theory that proposes several key processes: (1) automaticity, (2) habituation and skill refinement, (3) abstraction-by-chunking, and (4) discovery of new knowledge chunks. The process of automaticity caches the agent's knowledge about actions into a flat stimulus-response data structure that eliminates knowledge of action consequences; this data structure produces a response to environmental stimuli in constant time. The process of habituation and skill refinement uses environmental cues as rewards, which are used to develop a bias for the agent's actions in competing action-situation pairs where the agent did not have prior preferences. The process of abstraction-by-chunking monitors the agent's increased reliance on stimulus-response data structures and turns the agent's complex actions into primitive acts, thereby eliminating the need for the agent to elaborate its plans beyond a certain level. The process of discovering knowledge chunks monitors the agent's use of stimulus-response data structures, constructs knowledge about action preconditions and consequences, and assembles patterns of interaction into plans. I have implemented several agents that demonstrate parts of my theory using an agent architecture I developed called GLAIR. GLAIR models agents that function in the world. Beyond my theory about routines, GLAIR is used to demonstrate situated as well as deliberative actions. Each of GLAIR's three levels (Sensori-actuator, Perceptuo-motor, and Knowledge) uses different representations and models different components of intelligent agency. GLAIR levels operate semi-autonomously. 
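The automaticity process described above, caching deliberative knowledge into a flat stimulus-response table with constant-time lookup, might be sketched as follows (this is an illustrative reading, not GLAIR itself; all names are hypothetical):

```python
# Illustrative sketch of automaticity: once a slow, deliberative planner has
# chosen an action for a stimulus, the pair is cached in a flat table, so
# later encounters get a constant-time response with no planning and no
# modeling of action consequences.

class AutomaticityCache:
    def __init__(self, deliberate):
        self.deliberate = deliberate      # slow planner: stimulus -> action
        self.table = {}                   # flat stimulus-response store

    def respond(self, stimulus):
        if stimulus not in self.table:    # first encounter: plan, then cache
            self.table[stimulus] = self.deliberate(stimulus)
        return self.table[stimulus]       # cached lookup: constant time

# A toy planner standing in for knowledge-level deliberation.
planner_calls = []
def toy_planner(stimulus):
    planner_calls.append(stimulus)
    return "turn-left" if stimulus == "obstacle-ahead" else "move-forward"

agent = AutomaticityCache(toy_planner)
agent.respond("obstacle-ahead")
agent.respond("obstacle-ahead")           # second call hits the cache
print(len(planner_calls))                 # planner ran only once
```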
Whereas intra-level learning improves an agent's performance, inter-level learning migrates know-how from one level to another. Using the GLAIR architecture, I model agents that engage in routines, use these routines to guide their behavior, and learn further concepts having to do with acts. The significance of my theory, architecture, and agents is to illustrate that computational agents can autonomously learn know-how from their own routine interactions with the world as well as learn to improve their performance. %Z Fri, 22 May 1998 15:34:33 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/97-15.ps.Z %A Rapaport, William J. %T Implementation Is Semantic Interpretation %R 97-15 %D November 21, 1997 %I Department of Computer Science, SUNY Buffalo %K implementation, individuation, instantiation, reduction, supervenience, semantic interpretation %X What is the computational notion of ``implementation''? It is not individuation, instantiation, reduction, or supervenience. It is, I suggest, semantic interpretation. %Z Mon, 20 Jul 1998 14:05:05 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/98-06.ps.Z %A Slavik, Petr %T Approximation Algorithms for Set Cover and Related Problems %R 98-06 %D April 30, 1998 %I Department of Computer Science, SUNY Buffalo %K Approximation Algorithms, Combinatorial Optimization, Errand Scheduling, Generalized Traveling Salesman Problem, Group Steiner Tree Problem, Partial Set Cover, Set Cover, Tree Cover Problem %X In this thesis, we analyze several known and newly designed algorithms for approximating optimal solutions to NP-hard optimization problems. We give a new analysis of the greedy algorithm for approximating the SET COVER and PARTIAL SET COVER problems, obtaining significantly improved performance bounds. We also give the first approximation algorithm with a non-trivial performance bound for the ERRAND SCHEDULING and TREE COVER problems, also known as the GENERALIZED TRAVELING SALESMAN and GROUP STEINER TREE problems. 
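The greedy SET COVER algorithm whose analysis the thesis improves is short enough to state directly: repeatedly choose the set covering the most still-uncovered elements. A minimal sketch:

```python
# Minimal sketch of the greedy SET COVER heuristic: at each step, pick the
# set that covers the largest number of still-uncovered elements.

def greedy_set_cover(universe, sets):
    uncovered = set(universe)
    cover = []
    while uncovered:
        # Choose the set with maximum coverage of uncovered elements.
        best = max(sets, key=lambda s: len(uncovered & s))
        if not (uncovered & best):
            raise ValueError("instance has no cover")
        cover.append(best)
        uncovered -= best
    return cover

universe = {1, 2, 3, 4, 5}
sets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
print(greedy_set_cover(universe, sets))   # -> [{1, 2, 3}, {4, 5}]
```

The classical bound for this heuristic is a harmonic-number factor of the optimum; the thesis's contribution is a sharper analysis of that factor.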
The main results of this thesis first appeared in my papers [87], [89], [91], and [90]; and in my technical reports [86] and [88]. %Z Mon, 20 Jul 1998 14:08:48 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/98-05.ps.Z %A Rapaport, William J. %A Ehrlich, Karen %T A Computational Theory of Vocabulary Acquisition %R 98-05 %D April 14, 1998 %I Department of Computer Science, SUNY Buffalo %K computational linguistics, vocabulary acquisition, machine learning %X As part of an interdisciplinary project to develop a computational cognitive model of a reader of narrative text, we are developing a computational theory of how natural-language-understanding systems can automatically acquire new vocabulary by determining from context the meaning of words that are unknown, misunderstood, or used in a new sense. `Context' includes surrounding text, grammatical information, and background knowledge, but no external sources. Our thesis is that the meaning of such a word *can* be determined from context, can be *revised* upon further encounters with the word, ``*converges*'' to a dictionary-like definition if enough context has been provided and there have been enough exposures to the word, and eventually ``*settles down*'' to a ``steady state'' that is always subject to revision upon further encounters with the word. The system is being implemented in the SNePS knowledge-representation and reasoning system. This essay is forthcoming as a chapter in Iwanska, Lucja, & Shapiro, Stuart C. (1999), _Natural Language Processing and Knowledge Representation: Language for Knowledge and Knowledge for Language_ (Menlo Park, CA/Cambridge, MA: AAAI Press/MIT Press). 
%Z Mon, 20 Jul 1998 14:10:55 GMT %U ftp://ftp.cs.buffalo.edu/pub/tech-reports/98-07.ps.Z %A Sheikholeslami, Gholamhosein %A Wang, Wenjie %A Zhang, Aidong %T A Model of Image Representation and Indexing in Image Database Systems %R 98-07 %D July 20, 1998 %I Department of Computer Science, SUNY Buffalo %K content-based image retrieval, image decomposition %X Image database systems need to be built to support effective and efficient access to image data on the basis of content. In this process, significant features must first be extracted from image data in their pixel format. These features must then be classified and indexed to assist efficient retrieval of image content. However, the issues central to automatic extraction and indexing of image content remain largely an open problem. In this paper, we investigate effective block-oriented image decomposition structures to be used as a fundamental data model for representing image content in large image databases. A hierarchical structure, the $S^g$-tree, is introduced to decompose database images. Each decomposition of an image segment generates $n$ equal-size subsegments that are uniformly distributed within the segment. Our analysis illustrates that a special case of the $S^g$-tree, the nona-tree decomposition (where $n = 9$), is an effective and efficient block-oriented decomposition structure for facilitating content-based image retrieval. Using the $S^g$-tree structure to represent image content in an image database, flexible access to partial contents of each database image can be provided. Various types of content-based queries and efficient image retrieval can then be supported through novel indexing and searching approaches. %Z Mon, 20 Jul 1998 14:13:04 GMT
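One level of the nona-tree decomposition described in the preceding abstract, nine equal-size subsegments uniformly distributed over the parent segment, can be sketched as follows. Here each subsegment is half the parent's width and height, anchored on a uniform 3x3 grid of offsets; this is one natural reading of "uniformly distributed," not necessarily the exact geometry of the report, and the function name is illustrative.

```python
# Sketch of one nona-tree decomposition level: a segment (x, y, w, h) is
# split into 9 equal-size subsegments, each w/2 x h/2, whose top-left
# corners lie on a uniform 3x3 grid of quarter-step offsets.

def nona_children(x, y, w, h):
    """Return the 9 (x, y, w, h) subsegments of a segment."""
    cw, ch = w // 2, h // 2                  # every child has the same size
    return [(x + i * (w // 4), y + j * (h // 4), cw, ch)
            for j in range(3) for i in range(3)]

children = nona_children(0, 0, 512, 512)
print(len(children))          # 9 subsegments
print(children[4])            # the centered subsegment: (128, 128, 256, 256)
```

Because neighboring children overlap by half their extent, an object falling across a quadrant boundary is still wholly contained in some child, which is what makes the structure useful for matching partial image content.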