Adjacency-Matrix Representation

An adjacency-matrix representation of a graph is a V-by-V matrix of Boolean values, with the entry in row v and column w defined to be 1 if there is an edge connecting vertex v and vertex w in the graph, and to be 0 otherwise. The figure below depicts an example.

Figure: Adjacency-matrix graph representation. This Boolean matrix is another representation of the sample graph. It has a 1 (true) in row v and column w if there is an edge connecting vertex v and vertex w, and a 0 (false) in row v and column w if there is no such edge. The matrix is symmetric about the diagonal. For example, the sixth row (and the sixth column) says that vertex 6 is connected to vertices 0 and 4. For some applications, we will adopt the convention that each vertex is connected to itself, and assign 1s on the main diagonal. The large blocks of 0s in the upper right and lower left corners are artifacts of the way we assigned vertex numbers for this example, not characteristic of the graph (except that they do indicate that the graph is sparse).

Program 17.7 is an implementation of the graph ADT interface that uses a direct representation of this matrix, built as an array of arrays. It is a two-dimensional existence table with the entry adj[v][w] set to true if there is an edge connecting v and w in the graph, and set to false otherwise. Note that maintaining this property in an undirected graph requires that each edge be represented by two entries: the edge v-w is represented by true values in both adj[v][w] and adj[w][v], as is the edge w-v.

Figure: Adjacency-matrix data structure. This figure depicts the Java representation of the sample graph, as an array of arrays (with 0 and 1 representing false and true, respectively).

This implementation is better suited for dense graphs than for sparse ones, so in some contexts we might wish to explicitly distinguish it from other implementations.
For example, we might use a wrapper class DenseGraph to make this distinction explicit.
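The structure just described can be sketched in Java as follows. This is a minimal illustration in the spirit of Program 17.7, not the book's actual code; the class and field names (DenseGraph, adj, eCnt) are assumptions made for this sketch.

```java
// Sketch of an adjacency-matrix graph ADT. adj[v][w] is true iff edge v-w exists.
public class DenseGraph {
    private final int vCnt;        // number of vertices, fixed at construction
    private int eCnt;              // number of edges
    private final boolean[][] adj; // the existence table
    private final boolean digraph;

    public DenseGraph(int vCnt, boolean digraph) {
        this.vCnt = vCnt;
        this.digraph = digraph;
        this.adj = new boolean[vCnt][vCnt]; // Java sets all entries to false: O(V^2)
    }

    public int V() { return vCnt; }
    public int E() { return eCnt; }

    public boolean edge(int v, int w) { return adj[v][w]; }

    public void insert(int v, int w) {
        if (adj[v][w]) return;          // no parallel edges: re-insertion is a no-op
        adj[v][w] = true;
        if (!digraph) adj[w][v] = true; // undirected edge stored as two entries
        eCnt++;
    }

    public void remove(int v, int w) {
        if (!adj[v][w]) return;         // removing an absent edge has no effect
        adj[v][w] = false;
        if (!digraph) adj[w][v] = false;
        eCnt--;
    }
}
```

Note how the no-op branches in insert and remove make the parallel-edge and absent-edge behavior described above automatic.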
In the adjacency matrix that represents a graph G, row v is an array that serves as an existence table whose ith entry is true if vertex i is adjacent to v (the edge v-i is in G). Thus, to provide clients with the capability to process the vertices adjacent to v, we need only provide code that scans through this array to find true entries, as shown in Program 17.8. We need to be mindful that, with this implementation, processing all of the vertices adjacent to a given vertex requires (at least) time proportional to V, no matter how many such vertices exist.

As mentioned earlier, our interface requires that the number of vertices be known to the client when the graph is initialized. If desired, we could allow for inserting and deleting vertices (see Exercise 17.20). A key feature of the constructor in Program 17.7 is that Java automatically initializes the matrix by setting its entries all to false. We need to be mindful that this operation takes time proportional to V^{2}, no matter how many edges are in the graph. Error checks for insufficient memory are not included in Program 17.7 for brevity; it is prudent coding practice to add them before using this code (see Exercise 17.23).

To add an edge, we set the indicated matrix entries to true (one for digraphs, two for undirected graphs). This representation does not allow parallel edges: If an edge is to be inserted for which the matrix entries are already true, the code has no effect. In some ADT designs, it might be preferable to inform the client of the attempt to insert a parallel edge, perhaps using a return code from insert. This representation does allow self-loops: An edge v-v is represented by a nonzero entry in a[v][v]. To remove an edge, we set the indicated matrix entries to false. If an edge for which the matrix entries are already false is removed, the code has no effect. Again, in some ADT designs, we might wish to inform the client of such a condition.
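The row scan just described can be sketched as a small helper along the lines of Program 17.8; this is an illustration, not the book's code, and the class and method names are assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Collect the vertices adjacent to v by scanning row v of a boolean
// adjacency matrix. The scan always takes time proportional to V,
// regardless of how many adjacent vertices actually exist.
public class MatrixAdjScan {
    public static List<Integer> adjacent(boolean[][] adj, int v) {
        List<Integer> result = new ArrayList<>();
        for (int w = 0; w < adj[v].length; w++)
            if (adj[v][w])          // true entry: edge v-w is present
                result.add(w);
        return result;
    }
}
```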
If we are processing huge graphs or huge numbers of small graphs, or space is otherwise tight, there are several ways to save space. For example, adjacency matrices that represent undirected graphs are symmetric: a[v][w] is always equal to a[w][v]. Thus, we could save space by storing only one-half of this symmetric matrix (see Exercise 17.21). Another way to save a significant amount of space is to use a matrix of bits (assuming that Java does not do so for an array of arrays of boolean values). In this way, for instance, we could represent graphs of up to about 64,000 vertices in about 64 million 64-bit words (see Exercise 17.22). These implementations have the slight complication of requiring us to always use the edge method to test for the existence of an edge. (In a simple adjacency-matrix implementation, we can test for the existence of an edge v-w by simply testing a[v][w].) Such space-saving techniques are effective but come at the cost of extra overhead that may fall in the inner loop in time-critical applications.

Many applications involve associating other information with each edge; in such cases, we can generalize the adjacency matrix to hold any information whatever, not just Boolean values. Whatever data type we use for the matrix elements, we need to include an indication of whether the indicated edge is present or absent. In Chapters 20 and 21, we explore such representations.

Use of adjacency matrices depends on associating vertex names with integers between 0 and V – 1. This assignment might be done in one of many ways; for example, we consider a program that does so later in the book. Therefore, the specific matrix of Boolean values that we represent with an array of arrays in Java is but one possible representation of any given graph as an adjacency matrix, because another program might assign different vertex names to the indices we use to specify rows and columns. Two matrices that appear to be markedly different could represent the same graph (see Exercise 17.17).
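A bit-packed matrix of the kind just mentioned might be sketched as follows: each row becomes an array of 64-bit words, and every edge test must go through the edge method rather than a direct array access. The class shape and names here are assumptions for illustration, not the book's solution to Exercise 17.22.

```java
// Sketch of a bit-packed adjacency matrix for an undirected graph.
// Row v is stored as ceil(V/64) long words, one bit per potential edge.
public class BitMatrixGraph {
    private final long[][] rows;

    public BitMatrixGraph(int vCnt) {
        rows = new long[vCnt][(vCnt + 63) / 64];
    }

    public void insert(int v, int w) {
        rows[v][w >> 6] |= 1L << (w & 63); // set bit w in row v
        rows[w][v >> 6] |= 1L << (v & 63); // and bit v in row w (undirected)
    }

    public boolean edge(int v, int w) {
        return (rows[v][w >> 6] & (1L << (w & 63))) != 0;
    }
}
```

The shifts and masks in the inner loop are exactly the extra overhead the text warns about for time-critical applications.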
This observation is a restatement of the graph isomorphism problem: Although we might like to determine whether or not two different matrices represent the same graph, no one has devised an algorithm that can always do so efficiently. This difficulty is fundamental. For example, our ability to find an efficient solution to various important graph-processing problems depends completely on the way in which the vertices are numbered (see, for example, Exercise 17.25).

Program 17.3 prints out a table with the vertices adjacent to each vertex. When used with the implementation in Program 17.7, it prints the vertices in order of their vertex index. Notice, though, that it is not part of the definition of AdjList that it visits vertices in index order, so developing an ADT client that prints out the adjacency-matrix representation of a graph is not a trivial task (see Exercise 17.18). The outputs produced by these programs are themselves graph representations that clearly illustrate a basic performance tradeoff. To print out the matrix, we need room on the page for all V^{2} entries; to print out the lists, we need room for just V + E numbers. For sparse graphs, when V^{2} is huge compared to V + E, we prefer the lists; for dense graphs, when E and V^{2} are comparable, we prefer the matrix. As we shall soon see, we make the same basic tradeoff when we compare the adjacency-matrix representation with its primary alternative: an explicit representation of the lists.

The adjacency-matrix representation is not satisfactory for huge sparse graphs: We need at least V^{2} bits of storage and V^{2} steps just to construct the representation. In a dense graph, when the number of edges (the number of true entries in the matrix) is proportional to V^{2}, this cost may be acceptable, because time proportional to V^{2} is required to process the edges no matter what representation we use.
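A client that prints the table of adjacent vertices in index order, as Program 17.3 does when used with the matrix implementation, can be sketched as follows; the class name and the choice to return a String are assumptions made for this illustration.

```java
// Print each vertex followed by its adjacent vertices, scanning matrix
// rows in index order. The output takes V lines, with room needed for
// up to V entries per line in the worst case.
public class AdjTablePrinter {
    public static String table(boolean[][] adj) {
        StringBuilder sb = new StringBuilder();
        for (int v = 0; v < adj.length; v++) {
            sb.append(v).append(':');
            for (int w = 0; w < adj.length; w++)
                if (adj[v][w]) sb.append(' ').append(w);
            sb.append('\n');
        }
        return sb.toString();
    }
}
```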
In a sparse graph, however, just initializing the matrix could be the dominant factor in the running time of an algorithm. Moreover, we may not even have enough space for the matrix. For example, we may be faced with graphs with millions of vertices and tens of millions of edges, but we may not want, or be able, to pay the price of reserving space for trillions of false entries in the adjacency matrix. On the other hand, when we do need to process a huge dense graph, the false entries that represent absent edges increase our space needs by only a constant factor and provide us with the ability to determine whether any particular edge is present in constant time. For example, disallowing parallel edges is automatic in an adjacency matrix but is costly in some other representations. If we do have space available to hold an adjacency matrix, and either V^{2} is so small as to represent a negligible amount of time or we will be running a complex algorithm that requires more than V^{2} steps to complete, the adjacency-matrix representation may be the method of choice, no matter how dense the graph.

As usual, an important factor to consider is that the implementation in Programs 17.7 and 17.8 lacks a clone implementation (see Section 4.9). For many applications, this defect could lead to unexpected results or severe performance problems. Such a method is easy to develop for applications where it is needed (see Exercise 17.26).

Exercises
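A deep-copy method of the kind Section 4.9 calls for might be sketched like this; the class shape and the method name copy are assumptions, not the book's solution to Exercise 17.26. The point is that without it, a naive copy shares the underlying matrix, so edits to the copy silently change the original.

```java
// Sketch of a deep copy for an adjacency-matrix graph (undirected).
public class CopyableDenseGraph {
    private final boolean[][] adj;

    public CopyableDenseGraph(int vCnt) { adj = new boolean[vCnt][vCnt]; }

    public void insert(int v, int w) { adj[v][w] = adj[w][v] = true; }
    public boolean edge(int v, int w) { return adj[v][w]; }

    public CopyableDenseGraph copy() {
        CopyableDenseGraph g = new CopyableDenseGraph(adj.length);
        for (int v = 0; v < adj.length; v++)
            g.adj[v] = adj[v].clone(); // row-by-row: clone() of a 2-D array is shallow
        return g;
    }
}
```

Note that this copy takes time and space proportional to V^{2}, the same cost profile as the constructor.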
