1""" 

2Graph summarization finds smaller representations of graphs resulting in faster 

3runtime of algorithms, reduced storage needs, and noise reduction. 

4Summarization has applications in areas such as visualization, pattern mining, 

5clustering and community detection, and more. Core graph summarization 

6techniques are grouping/aggregation, bit-compression, 

7simplification/sparsification, and influence based. Graph summarization 

8algorithms often produce either summary graphs in the form of supergraphs or 

9sparsified graphs, or a list of independent structures. Supergraphs are the 

10most common product, which consist of supernodes and original nodes and are 

11connected by edges and superedges, which represent aggregate edges between 

12nodes and supernodes. 

13 

14Grouping/aggregation based techniques compress graphs by representing 

15close/connected nodes and edges in a graph by a single node/edge in a 

16supergraph. Nodes can be grouped together into supernodes based on their 

17structural similarities or proximity within a graph to reduce the total number 

18of nodes in a graph. Edge-grouping techniques group edges into lossy/lossless 

19nodes called compressor or virtual nodes to reduce the total number of edges in 

20a graph. Edge-grouping techniques can be lossless, meaning that they can be 

21used to re-create the original graph, or techniques can be lossy, requiring 

22less space to store the summary graph, but at the expense of lower 

23reconstruction accuracy of the original graph. 

24 

25Bit-compression techniques minimize the amount of information needed to 

26describe the original graph, while revealing structural patterns in the 

27original graph. The two-part minimum description length (MDL) is often used to 

28represent the model and the original graph in terms of the model. A key 

29difference between graph compression and graph summarization is that graph 

30summarization focuses on finding structural patterns within the original graph, 

31whereas graph compression focuses on compressions the original graph to be as 

32small as possible. **NOTE**: Some bit-compression methods exist solely to 

33compress a graph without creating a summary graph or finding comprehensible 

34structural patterns. 

35 

36Simplification/Sparsification techniques attempt to create a sparse 

37representation of a graph by removing unimportant nodes and edges from the 

38graph. Sparsified graphs differ from supergraphs created by 

39grouping/aggregation by only containing a subset of the original nodes and 

40edges of the original graph. 

41 

42Influence based techniques aim to find a high-level description of influence 

43propagation in a large graph. These methods are scarce and have been mostly 

44applied to social graphs. 

45 

46*dedensification* is a grouping/aggregation based technique to compress the 

47neighborhoods around high-degree nodes in unweighted graphs by adding 

48compressor nodes that summarize multiple edges of the same type to 

49high-degree nodes (nodes with a degree greater than a given threshold). 

50Dedensification was developed for the purpose of increasing performance of 

51query processing around high-degree nodes in graph databases and enables direct 

52operations on the compressed graph. The structural patterns surrounding 

53high-degree nodes in the original is preserved while using fewer edges and 

54adding a small number of compressor nodes. The degree of nodes present in the 

55original graph is also preserved. The current implementation of dedensification 

56supports graphs with one edge type. 
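
A minimal usage sketch of the dedensification entry point (the random graph and
the threshold value below are only illustrative)::

    import networkx as nx

    G = nx.gnp_random_graph(50, 0.3, seed=42, directed=True)
    compressed_graph, compressor_nodes = nx.dedensify(G, threshold=8)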

For more information on graph summarization, see `Graph Summarization Methods
and Applications: A Survey <https://dl.acm.org/doi/abs/10.1145/3186727>`_
"""

from collections import Counter, defaultdict

import networkx as nx

__all__ = ["dedensify", "snap_aggregation"]


@nx._dispatch
def dedensify(G, threshold, prefix=None, copy=True):
    """Compresses neighborhoods around high-degree nodes

    Reduces the number of edges to high-degree nodes by adding compressor nodes
    that summarize multiple edges of the same type to high-degree nodes (nodes
    with a degree greater than a given threshold). Dedensification also has
    the added benefit of reducing the number of edges around high-degree nodes.
    The implementation currently supports graphs with a single edge type.

    Parameters
    ----------
    G: graph
        A networkx graph
    threshold: int
        Degree threshold for a node to be considered a high-degree node; nodes
        with a degree strictly greater than ``threshold`` are compressed.
        The threshold must be greater than or equal to 2.
    prefix: str or None, optional (default: None)
        An optional prefix for denoting compressor nodes
    copy: bool, optional (default: True)
        If True, dedensification is performed on a copy of ``G``; otherwise
        ``G`` is modified in place.

    Returns
    -------
    dedensified networkx graph : (graph, set)
        2-tuple of the dedensified graph and set of compressor nodes

    Notes
    -----
    Following the algorithm in [1]_, edges are removed from the graph by
    compressing/decompressing the neighborhoods around high-degree nodes:
    compressor nodes are added that summarize multiple edges of the same type
    to high-degree nodes. Dedensification will only add a compressor node when
    doing so will reduce the total number of edges in the given graph. This
    implementation currently supports graphs with a single edge type.

    Examples
    --------
    Dedensification will only add compressor nodes when doing so would result
    in fewer edges::

        >>> original_graph = nx.DiGraph()
        >>> original_graph.add_nodes_from(
        ...     ["1", "2", "3", "4", "5", "6", "A", "B", "C"]
        ... )
        >>> original_graph.add_edges_from(
        ...     [
        ...         ("1", "C"), ("1", "B"),
        ...         ("2", "C"), ("2", "B"), ("2", "A"),
        ...         ("3", "B"), ("3", "A"), ("3", "6"),
        ...         ("4", "C"), ("4", "B"), ("4", "A"),
        ...         ("5", "B"), ("5", "A"),
        ...         ("6", "5"),
        ...         ("A", "6")
        ...     ]
        ... )
        >>> c_graph, c_nodes = nx.dedensify(original_graph, threshold=2)
        >>> original_graph.number_of_edges()
        15
        >>> c_graph.number_of_edges()
        14

    A dedensified, directed graph can be "densified" to reconstruct the
    original graph::

        >>> original_graph = nx.DiGraph()
        >>> original_graph.add_nodes_from(
        ...     ["1", "2", "3", "4", "5", "6", "A", "B", "C"]
        ... )
        >>> original_graph.add_edges_from(
        ...     [
        ...         ("1", "C"), ("1", "B"),
        ...         ("2", "C"), ("2", "B"), ("2", "A"),
        ...         ("3", "B"), ("3", "A"), ("3", "6"),
        ...         ("4", "C"), ("4", "B"), ("4", "A"),
        ...         ("5", "B"), ("5", "A"),
        ...         ("6", "5"),
        ...         ("A", "6")
        ...     ]
        ... )
        >>> c_graph, c_nodes = nx.dedensify(original_graph, threshold=2)
        >>> # re-densifies the compressed graph into the original graph
        >>> for c_node in c_nodes:
        ...     all_neighbors = set(nx.all_neighbors(c_graph, c_node))
        ...     out_neighbors = set(c_graph.neighbors(c_node))
        ...     for out_neighbor in out_neighbors:
        ...         c_graph.remove_edge(c_node, out_neighbor)
        ...     in_neighbors = all_neighbors - out_neighbors
        ...     for in_neighbor in in_neighbors:
        ...         c_graph.remove_edge(in_neighbor, c_node)
        ...         for out_neighbor in out_neighbors:
        ...             c_graph.add_edge(in_neighbor, out_neighbor)
        ...     c_graph.remove_node(c_node)
        ...
        >>> nx.is_isomorphic(original_graph, c_graph)
        True

    References
    ----------
    .. [1] Maccioni, A., & Abadi, D. J. (2016, August).
       Scalable pattern matching over compressed graphs via dedensification.
       In Proceedings of the 22nd ACM SIGKDD International Conference on
       Knowledge Discovery and Data Mining (pp. 1755-1764).
       http://www.cs.umd.edu/~abadi/papers/graph-dedense.pdf
    """

    if threshold < 2:
        raise nx.NetworkXError("The degree threshold must be >= 2")

    degrees = G.in_degree if G.is_directed() else G.degree
    # Group nodes based on degree threshold
    high_degree_nodes = {n for n, d in degrees if d > threshold}
    low_degree_nodes = G.nodes() - high_degree_nodes

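    # Group nodes by their set of high-degree neighbors (successors in directed
    # graphs); nodes sharing the same set can be routed through one compressor node.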

    auxiliary = {}
    for node in G:
        high_degree_neighbors = frozenset(high_degree_nodes & set(G[node]))
        if high_degree_neighbors:
            if high_degree_neighbors in auxiliary:
                auxiliary[high_degree_neighbors].add(node)
            else:
                auxiliary[high_degree_neighbors] = {node}

    if copy:
        G = G.copy()

    compressor_nodes = set()
    for index, (high_degree_nodes, low_degree_nodes) in enumerate(auxiliary.items()):
        low_degree_node_count = len(low_degree_nodes)
        high_degree_node_count = len(high_degree_nodes)
        old_edges = high_degree_node_count * low_degree_node_count
        new_edges = high_degree_node_count + low_degree_node_count
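        # A compressor node replaces |H| * |L| direct edges with |H| + |L| edges
        # (|H| shared high-degree neighbors, |L| nodes sharing them), so it is
        # only added when that is a strict reduction.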

        if old_edges <= new_edges:
            continue
        compression_node = "".join(str(node) for node in high_degree_nodes)
        if prefix:
            compression_node = str(prefix) + compression_node
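        # Rewire: each of the grouped nodes keeps a single edge to the compressor
        # node, and the compressor node fans out to every shared high-degree node.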

        for node in low_degree_nodes:
            for high_node in high_degree_nodes:
                if G.has_edge(node, high_node):
                    G.remove_edge(node, high_node)

            G.add_edge(node, compression_node)
        for node in high_degree_nodes:
            G.add_edge(compression_node, node)
        compressor_nodes.add(compression_node)
    return G, compressor_nodes


def _snap_build_graph(
    G,
    groups,
    node_attributes,
    edge_attributes,
    neighbor_info,
    edge_types,
    prefix,
    supernode_attribute,
    superedge_attribute,
):
    """
    Build the summary graph from the data structures produced in the SNAP aggregation algorithm

    Used in the SNAP aggregation algorithm to build the output summary graph and supernode
    lookup dictionary. This process uses the original graph and the data structures to
    create the supernodes with the correct node attributes, and the superedges with the
    correct edge attributes.

    Parameters
    ----------
    G: networkx.Graph
        the original graph to be summarized
    groups: dict
        A dictionary of unique group IDs and their corresponding node groups
    node_attributes: iterable
        An iterable of the node attributes considered in the summarization process
    edge_attributes: iterable
        An iterable of the edge attributes considered in the summarization process
    neighbor_info: dict
        A data structure indicating the number of edges a node has with the
        groups in the current summarization of each edge type
    edge_types: dict
        dictionary of edges in the graph and their corresponding attributes recognized
        in the summarization
    prefix: string
        The prefix to be added to all supernodes
    supernode_attribute: str
        The node attribute for recording the supernode groupings of nodes
    superedge_attribute: str
        The edge attribute for recording the edge types represented by superedges

    Returns
    -------
    summary graph: networkx.Graph
    """

    output = G.__class__()
    node_label_lookup = {}
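    # Create one supernode per group: copy the grouping attributes from an
    # arbitrary member (all members share them) and record the member set
    # under the supernode attribute.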

    for index, group_id in enumerate(groups):
        group_set = groups[group_id]
        supernode = f"{prefix}{index}"
        node_label_lookup[group_id] = supernode
        supernode_attributes = {
            attr: G.nodes[next(iter(group_set))][attr] for attr in node_attributes
        }
        supernode_attributes[supernode_attribute] = group_set
        output.add_node(supernode, **supernode_attributes)

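    # Add superedges: for each group, inspect one representative member's
    # neighbor_info to find which other groups it connects to and with which
    # edge types (valid because every member of an AR-compatible group has the
    # same connections).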

    for group_id in groups:
        group_set = groups[group_id]
        source_supernode = node_label_lookup[group_id]
        for other_group, group_edge_types in neighbor_info[
            next(iter(group_set))
        ].items():
            if group_edge_types:
                target_supernode = node_label_lookup[other_group]
                summary_graph_edge = (source_supernode, target_supernode)

                edge_types = [
                    dict(zip(edge_attributes, edge_type))
                    for edge_type in group_edge_types
                ]

                has_edge = output.has_edge(*summary_graph_edge)
                if output.is_multigraph():
                    if not has_edge:
                        for edge_type in edge_types:
                            output.add_edge(*summary_graph_edge, **edge_type)
                    elif not output.is_directed():
                        existing_edge_data = output.get_edge_data(*summary_graph_edge)
                        for edge_type in edge_types:
                            if edge_type not in existing_edge_data.values():
                                output.add_edge(*summary_graph_edge, **edge_type)
                else:
                    superedge_attributes = {superedge_attribute: edge_types}
                    output.add_edge(*summary_graph_edge, **superedge_attributes)

    return output


def _snap_eligible_group(G, groups, group_lookup, edge_types):
    """
    Finds a group that is eligible to be split.

    A group is eligible to be split if its nodes do not all have the same
    edge types to nodes in the same other groups, i.e. the group is not yet
    homogeneous with respect to its neighboring groups.

    Parameters
    ----------
    G: graph
        graph to be summarized
    groups: dict
        A dictionary of unique group IDs and their corresponding node groups
    group_lookup: dict
        dictionary of nodes and their current corresponding group ID
    edge_types: dict
        dictionary of edges in the graph and their corresponding attributes recognized
        in the summarization

    Returns
    -------
    tuple
        The ID of a group to split (``None`` if every group is homogeneous),
        and the computed neighbor_info data structure
    """
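    # neighbor_info[node][group_id] is a Counter mapping each edge type to the
    # number of edges ``node`` has to members of ``group_id``.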

    neighbor_info = {node: {gid: Counter() for gid in groups} for node in group_lookup}
    for group_id in groups:
        current_group = groups[group_id]

        # build neighbor_info for nodes in group
        for node in current_group:
            neighbor_info[node] = {group_id: Counter() for group_id in groups}
            edges = G.edges(node, keys=True) if G.is_multigraph() else G.edges(node)
            for edge in edges:
                neighbor = edge[1]
                edge_type = edge_types[edge]
                neighbor_group_id = group_lookup[neighbor]
                neighbor_info[node][neighbor_group_id][edge_type] += 1

        # check if group_id is eligible to be split
        group_size = len(current_group)
        for other_group_id in groups:
            edge_counts = Counter()
            for node in current_group:
                edge_counts.update(neighbor_info[node][other_group_id].keys())

            if not all(count == group_size for count in edge_counts.values()):
                # only the neighbor_info of the returned group_id is required for handling group splits
                return group_id, neighbor_info

    # if no eligible groups, complete neighbor_info is calculated
    return None, neighbor_info


def _snap_split(groups, neighbor_info, group_lookup, group_id):
    """
    Splits a group based on edge types and updates the groups accordingly

    Splits the group with the given group_id based on the edge types
    of the nodes so that each new grouping will all have the same
    edges with other nodes.

    Parameters
    ----------
    groups: dict
        A dictionary of unique group IDs and their corresponding node groups
    neighbor_info: dict
        A data structure indicating the number of edges a node has with the
        groups in the current summarization of each edge type
    group_lookup: dict
        dictionary of nodes and their current corresponding group ID
    group_id: object
        ID of group to be split

    Returns
    -------
    dict
        The updated groups based on the split
    """
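    # Partition the members by their "signature": for each current group, the
    # set of edge types the node has toward that group. Members with identical
    # signatures stay together.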

    new_group_mappings = defaultdict(set)
    for node in groups[group_id]:
        signature = tuple(
            frozenset(edge_types) for edge_types in neighbor_info[node].values()
        )
        new_group_mappings[signature].add(node)

    # leave the biggest new_group as the original group
    new_groups = sorted(new_group_mappings.values(), key=len)
    for new_group in new_groups[:-1]:
        # Assign unused integer as the new_group_id
        # ids are tuples, so will not interact with the original group_ids
        new_group_id = len(groups)
        groups[new_group_id] = new_group
        groups[group_id] -= new_group
        for node in new_group:
            group_lookup[node] = new_group_id

    return groups


@nx._dispatch(node_attrs="[node_attributes]", edge_attrs="[edge_attributes]")
def snap_aggregation(
    G,
    node_attributes,
    edge_attributes=(),
    prefix="Supernode-",
    supernode_attribute="group",
    superedge_attribute="types",
):
    """Creates a summary graph based on attributes and connectivity.

    This function uses the Summarization by Grouping Nodes on Attributes
    and Pairwise edges (SNAP) algorithm for summarizing a given
    graph by grouping nodes by node attributes and their edge attributes
    into supernodes in a summary graph. The name SNAP should not be
    confused with the Stanford Network Analysis Project (SNAP).

    Here is a high-level view of how this algorithm works:

    1) Group nodes by node attribute values.

    2) Iteratively split groups until all nodes in each group have edges
    to nodes in the same groups. That is, until all the groups are homogeneous
    in their member nodes' edges to other groups. For example,
    if all the nodes in group A only have edges to nodes in group B, then the
    group is homogeneous and does not need to be split. If all nodes in group B
    have edges with nodes in groups {A, C}, but some also have edges with other
    nodes in B, then group B is not homogeneous and needs to be split into
    a group of nodes having edges with {A, C} and a group of nodes having
    edges with {A, B, C}. This way, viewers of the summary graph can
    assume that all nodes in the group have the exact same node attributes and
    the exact same edges.

    3) Build the output summary graph, where the groups are represented by
    super-nodes. Edges represent the edges shared between all the nodes in each
    respective group.

    A SNAP summary graph can be used to visualize graphs that are too large to display
    or visually analyze, or to efficiently identify sets of similar nodes with similar connectivity
    patterns to other sets of similar nodes based on specified node and/or edge attributes in a graph.

    Parameters
    ----------
    G: graph
        Networkx Graph to be summarized
    node_attributes: iterable, required
        An iterable of the node attributes used to group nodes in the summarization process. Nodes
        with the same values for these attributes will be grouped together in the summary graph.
    edge_attributes: iterable, optional
        An iterable of the edge attributes considered in the summarization process. If provided, unique
        combinations of the attribute values found in the graph are used to
        determine the edge types in the graph. If not provided, all edges
        are considered to be of the same type.
    prefix: str
        The prefix used to denote supernodes in the summary graph. Defaults to 'Supernode-'.
    supernode_attribute: str
        The node attribute for recording the supernode groupings of nodes. Defaults to 'group'.
    superedge_attribute: str
        The edge attribute for recording the edge types of multiple edges. Defaults to 'types'.

    Returns
    -------
    networkx.Graph: summary graph

    Examples
    --------
    SNAP aggregation takes a graph and summarizes it in the context of user-provided
    node and edge attributes such that a viewer can more easily extract and
    analyze the information represented by the graph.

    >>> nodes = {
    ...     "A": dict(color="Red"),
    ...     "B": dict(color="Red"),
    ...     "C": dict(color="Red"),
    ...     "D": dict(color="Red"),
    ...     "E": dict(color="Blue"),
    ...     "F": dict(color="Blue"),
    ... }
    >>> edges = [
    ...     ("A", "E", "Strong"),
    ...     ("B", "F", "Strong"),
    ...     ("C", "E", "Weak"),
    ...     ("D", "F", "Weak"),
    ... ]
    >>> G = nx.Graph()
    >>> for node in nodes:
    ...     attributes = nodes[node]
    ...     G.add_node(node, **attributes)
    ...
    >>> for source, target, type in edges:
    ...     G.add_edge(source, target, type=type)
    ...
    >>> node_attributes = ('color', )
    >>> edge_attributes = ('type', )
    >>> summary_graph = nx.snap_aggregation(G, node_attributes=node_attributes, edge_attributes=edge_attributes)
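
    In this example the four red nodes end up in two supernodes, because their
    edges to the blue nodes carry different ``type`` values, so the summary
    graph contains three supernodes joined by two superedges:

    >>> len(summary_graph)
    3
    >>> summary_graph.number_of_edges()
    2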

    Notes
    -----
    The summary graph produced is called a maximum Attribute-edge
    compatible (AR-compatible) grouping. According to [1]_, an
    AR-compatible grouping means that all nodes in each group have the same
    exact node attribute values and the same exact edges and
    edge types to one or more nodes in the same groups. The maximal
    AR-compatible grouping is the grouping with the minimal cardinality.

    The AR-compatible grouping is the most detailed grouping provided by
    any of the SNAP algorithms.

    References
    ----------
    .. [1] Y. Tian, R. A. Hankins, and J. M. Patel. Efficient aggregation
       for graph summarization. In Proc. 2008 ACM-SIGMOD Int. Conf.
       Management of Data (SIGMOD’08), pages 567–580, Vancouver, Canada,
       June 2008.
    """
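    # An edge's "type" is the tuple of its values for the requested edge
    # attributes; for undirected graphs the reversed orientation of every edge
    # is registered as well, so lookups work for either traversal direction.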

    edge_types = {
        edge: tuple(attrs.get(attr) for attr in edge_attributes)
        for edge, attrs in G.edges.items()
    }
    if not G.is_directed():
        if G.is_multigraph():
            # list is needed to avoid mutating while iterating
            edges = [((v, u, k), etype) for (u, v, k), etype in edge_types.items()]
        else:
            # list is needed to avoid mutating while iterating
            edges = [((v, u), etype) for (u, v), etype in edge_types.items()]
        edge_types.update(edges)

    group_lookup = {
        node: tuple(attrs[attr] for attr in node_attributes)
        for node, attrs in G.nodes.items()
    }
    groups = defaultdict(set)
    for node, node_type in group_lookup.items():
        groups[node_type].add(node)

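    # Iteratively split any group that is not homogeneous in its edges to other
    # groups until every group is AR-compatible, then build the summary graph
    # from the final grouping.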

    eligible_group_id, neighbor_info = _snap_eligible_group(
        G, groups, group_lookup, edge_types
    )
    while eligible_group_id:
        groups = _snap_split(groups, neighbor_info, group_lookup, eligible_group_id)
        eligible_group_id, neighbor_info = _snap_eligible_group(
            G, groups, group_lookup, edge_types
        )
    return _snap_build_graph(
        G,
        groups,
        node_attributes,
        edge_attributes,
        neighbor_info,
        edge_types,
        prefix,
        supernode_attribute,
        superedge_attribute,
    )