Coverage for /pythoncovmergedfiles/medio/medio/usr/local/lib/python3.11/site-packages/networkx/algorithms/summarization.py: 9%



"""
Graph summarization finds smaller representations of graphs resulting in faster
runtime of algorithms, reduced storage needs, and noise reduction.
Summarization has applications in areas such as visualization, pattern mining,
clustering and community detection, and more. Core graph summarization
techniques are grouping/aggregation, bit-compression,
simplification/sparsification, and influence based. Graph summarization
algorithms often produce either summary graphs in the form of supergraphs or
sparsified graphs, or a list of independent structures. Supergraphs are the
most common product, which consist of supernodes and original nodes and are
connected by edges and superedges, which represent aggregate edges between
nodes and supernodes.

Grouping/aggregation based techniques compress graphs by representing
close/connected nodes and edges in a graph by a single node/edge in a
supergraph. Nodes can be grouped together into supernodes based on their
structural similarities or proximity within a graph to reduce the total number
of nodes in a graph. Edge-grouping techniques group edges into lossy/lossless
nodes called compressor or virtual nodes to reduce the total number of edges in
a graph. Edge-grouping techniques can be lossless, meaning that they can be
used to re-create the original graph, or techniques can be lossy, requiring
less space to store the summary graph, but at the expense of lower
reconstruction accuracy of the original graph.
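The grouping idea can be sketched in a few lines. As a minimal illustration
(not part of this module's API), nodes that share an identical neighbor set
are merged into candidate supernodes:

```python
from collections import defaultdict

import networkx as nx

# Toy sketch: group nodes that share exactly the same neighbor set, one
# simple structural-similarity criterion for forming supernodes.
G = nx.Graph([("a", "x"), ("b", "x"), ("a", "y"), ("b", "y"), ("c", "x")])
groups = defaultdict(set)
for node in G:
    groups[frozenset(G[node])].add(node)
# Only groups with more than one member are candidate supernodes;
# here "a" and "b" both have the neighbor set {"x", "y"}.
supernodes = [members for members in groups.values() if len(members) > 1]
```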


Bit-compression techniques minimize the amount of information needed to
describe the original graph, while revealing structural patterns in the
original graph. The two-part minimum description length (MDL) is often used to
represent the model and the original graph in terms of the model. A key
difference between graph compression and graph summarization is that graph
summarization focuses on finding structural patterns within the original graph,
whereas graph compression focuses on compressing the original graph to be as
small as possible. **NOTE**: Some bit-compression methods exist solely to
compress a graph without creating a summary graph or finding comprehensible
structural patterns.
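A toy two-part MDL comparison can make this concrete. The cost model below is
an illustrative assumption only (each edge endpoint costs ``log2(n)`` bits;
the counts are invented), not the encoding of any particular method:

```python
import math

def description_length_bits(n_nodes, n_edges):
    # Toy cost model: naming each endpoint of an edge among n_nodes
    # possibilities costs log2(n_nodes) bits.
    return n_edges * 2 * math.log2(n_nodes)

# L(G): encode a 100-node, 1000-edge graph directly.
original = description_length_bits(100, 1000)
# L(M): a hypothetical summary with 10 supernodes and 20 superedges.
model = description_length_bits(10, 20)
# L(G|M): 50 correction edges needed to restore the original exactly.
corrections = description_length_bits(100, 50)
two_part = model + corrections
# A summary is worthwhile under two-part MDL when L(M) + L(G|M) < L(G).
```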


Simplification/Sparsification techniques attempt to create a sparse
representation of a graph by removing unimportant nodes and edges from the
graph. Sparsified graphs differ from supergraphs created by
grouping/aggregation by only containing a subset of the original nodes and
edges of the original graph.

Influence based techniques aim to find a high-level description of influence
propagation in a large graph. These methods are scarce and have been mostly
applied to social graphs.

*dedensification* is a grouping/aggregation based technique to compress the
neighborhoods around high-degree nodes in unweighted graphs by adding
compressor nodes that summarize multiple edges of the same type to
high-degree nodes (nodes with a degree greater than a given threshold).
Dedensification was developed for the purpose of increasing performance of
query processing around high-degree nodes in graph databases and enables direct
operations on the compressed graph. The structural patterns surrounding
high-degree nodes in the original graph are preserved while using fewer edges
and adding a small number of compressor nodes. The degree of nodes present in
the original graph is also preserved. The current implementation of
dedensification supports graphs with one edge type.
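For example, on the small directed graph also used in ``dedensify``'s
docstring, a single compressor node replaces a bundle of six edges with five:

```python
import networkx as nx

# Directed graph in which nodes A, B, and C have in-degree above the
# threshold of 2; dedensify adds a compressor node only where doing so
# lowers the total edge count.
G = nx.DiGraph(
    [
        ("1", "C"), ("1", "B"),
        ("2", "C"), ("2", "B"), ("2", "A"),
        ("3", "B"), ("3", "A"), ("3", "6"),
        ("4", "C"), ("4", "B"), ("4", "A"),
        ("5", "B"), ("5", "A"),
        ("6", "5"),
        ("A", "6"),
    ]
)
compressed, compressors = nx.dedensify(G, threshold=2)
# The compressed graph has 14 edges versus the original 15: nodes 2 and 4
# both point at all of {A, B, C}, so their six edges collapse into five
# via one compressor node.
```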


For more information on graph summarization, see `Graph Summarization Methods
and Applications: A Survey <https://dl.acm.org/doi/abs/10.1145/3186727>`_
"""


from collections import Counter, defaultdict

import networkx as nx

__all__ = ["dedensify", "snap_aggregation"]



@nx._dispatchable(mutates_input={"not copy": 3}, returns_graph=True)
def dedensify(G, threshold, prefix=None, copy=True):
    """Compresses neighborhoods around high-degree nodes

    Reduces the number of edges to high-degree nodes by adding compressor nodes
    that summarize multiple edges of the same type to high-degree nodes (nodes
    with a degree greater than a given threshold). Dedensification also has
    the added benefit of reducing the number of edges around high-degree nodes.
    The implementation currently supports graphs with a single edge type.

    Parameters
    ----------
    G: graph
        A networkx graph
    threshold: int
        Minimum degree threshold of a node to be considered a high degree node.
        The threshold must be greater than or equal to 2.
    prefix: str or None, optional (default: None)
        An optional prefix for denoting compressor nodes
    copy: bool, optional (default: True)
        If True (the default), the input graph is copied before
        dedensification; if False, the input graph is modified in place.

    Returns
    -------
    dedensified networkx graph : (graph, set)
        2-tuple of the dedensified graph and set of compressor nodes

    Notes
    -----
    According to the algorithm in [1]_, removes edges in a graph by
    compressing/decompressing the neighborhoods around high degree nodes by
    adding compressor nodes that summarize multiple edges of the same type
    to high-degree nodes. Dedensification will only add a compressor node when
    doing so will reduce the total number of edges in the given graph. This
    implementation currently supports graphs with a single edge type.

    Examples
    --------
    Dedensification will only add compressor nodes when doing so would result
    in fewer edges::

        >>> original_graph = nx.DiGraph()
        >>> original_graph.add_nodes_from(
        ...     ["1", "2", "3", "4", "5", "6", "A", "B", "C"]
        ... )
        >>> original_graph.add_edges_from(
        ...     [
        ...         ("1", "C"), ("1", "B"),
        ...         ("2", "C"), ("2", "B"), ("2", "A"),
        ...         ("3", "B"), ("3", "A"), ("3", "6"),
        ...         ("4", "C"), ("4", "B"), ("4", "A"),
        ...         ("5", "B"), ("5", "A"),
        ...         ("6", "5"),
        ...         ("A", "6")
        ...     ]
        ... )
        >>> c_graph, c_nodes = nx.dedensify(original_graph, threshold=2)
        >>> original_graph.number_of_edges()
        15
        >>> c_graph.number_of_edges()
        14

    A dedensified, directed graph can be "densified" to reconstruct the
    original graph::

        >>> original_graph = nx.DiGraph()
        >>> original_graph.add_nodes_from(
        ...     ["1", "2", "3", "4", "5", "6", "A", "B", "C"]
        ... )
        >>> original_graph.add_edges_from(
        ...     [
        ...         ("1", "C"), ("1", "B"),
        ...         ("2", "C"), ("2", "B"), ("2", "A"),
        ...         ("3", "B"), ("3", "A"), ("3", "6"),
        ...         ("4", "C"), ("4", "B"), ("4", "A"),
        ...         ("5", "B"), ("5", "A"),
        ...         ("6", "5"),
        ...         ("A", "6")
        ...     ]
        ... )
        >>> c_graph, c_nodes = nx.dedensify(original_graph, threshold=2)
        >>> # re-densifies the compressed graph into the original graph
        >>> for c_node in c_nodes:
        ...     all_neighbors = set(nx.all_neighbors(c_graph, c_node))
        ...     out_neighbors = set(c_graph.neighbors(c_node))
        ...     for out_neighbor in out_neighbors:
        ...         c_graph.remove_edge(c_node, out_neighbor)
        ...     in_neighbors = all_neighbors - out_neighbors
        ...     for in_neighbor in in_neighbors:
        ...         c_graph.remove_edge(in_neighbor, c_node)
        ...         for out_neighbor in out_neighbors:
        ...             c_graph.add_edge(in_neighbor, out_neighbor)
        ...     c_graph.remove_node(c_node)
        ...
        >>> nx.is_isomorphic(original_graph, c_graph)
        True

    References
    ----------
    .. [1] Maccioni, A., & Abadi, D. J. (2016, August).
       Scalable pattern matching over compressed graphs via dedensification.
       In Proceedings of the 22nd ACM SIGKDD International Conference on
       Knowledge Discovery and Data Mining (pp. 1755-1764).
       http://www.cs.umd.edu/~abadi/papers/graph-dedense.pdf
    """

    if threshold < 2:
        raise nx.NetworkXError("The degree threshold must be >= 2")

    degrees = G.in_degree if G.is_directed() else G.degree
    # Group nodes based on degree threshold
    high_degree_nodes = {n for n, d in degrees if d > threshold}
    low_degree_nodes = G.nodes() - high_degree_nodes

    auxiliary = {}
    for node in G:
        high_degree_nbrs = frozenset(high_degree_nodes & set(G[node]))
        if high_degree_nbrs:
            if high_degree_nbrs in auxiliary:
                auxiliary[high_degree_nbrs].add(node)
            else:
                auxiliary[high_degree_nbrs] = {node}

    if copy:
        G = G.copy()

    compressor_nodes = set()
    for index, (high_degree_nodes, low_degree_nodes) in enumerate(auxiliary.items()):
        low_degree_node_count = len(low_degree_nodes)
        high_degree_node_count = len(high_degree_nodes)
        old_edges = high_degree_node_count * low_degree_node_count
        new_edges = high_degree_node_count + low_degree_node_count
        if old_edges <= new_edges:
            continue
        compression_node = "".join(str(node) for node in high_degree_nodes)
        if prefix:
            compression_node = str(prefix) + compression_node
        for node in low_degree_nodes:
            for high_node in high_degree_nodes:
                if G.has_edge(node, high_node):
                    G.remove_edge(node, high_node)

            G.add_edge(node, compression_node)
        for node in high_degree_nodes:
            G.add_edge(compression_node, node)
        compressor_nodes.add(compression_node)
    return G, compressor_nodes



def _snap_build_graph(
    G,
    groups,
    node_attributes,
    edge_attributes,
    neighbor_info,
    edge_types,
    prefix,
    supernode_attribute,
    superedge_attribute,
):
    """
    Build the summary graph from the data structures produced in the SNAP aggregation algorithm

    Used in the SNAP aggregation algorithm to build the output summary graph and supernode
    lookup dictionary. This process uses the original graph and the data structures to
    create the supernodes with the correct node attributes, and the superedges with the
    correct edge attributes

    Parameters
    ----------
    G: networkx.Graph
        the original graph to be summarized
    groups: dict
        A dictionary of unique group IDs and their corresponding node groups
    node_attributes: iterable
        An iterable of the node attributes considered in the summarization process
    edge_attributes: iterable
        An iterable of the edge attributes considered in the summarization process
    neighbor_info: dict
        A data structure indicating the number of edges a node has with the
        groups in the current summarization of each edge type
    edge_types: dict
        dictionary of edges in the graph and their corresponding attributes recognized
        in the summarization
    prefix: string
        The prefix to be added to all supernodes
    supernode_attribute: str
        The node attribute for recording the supernode groupings of nodes
    superedge_attribute: str
        The edge attribute for recording the edge types represented by superedges

    Returns
    -------
    summary graph: Networkx graph
    """

    output = G.__class__()
    node_label_lookup = {}
    for index, group_id in enumerate(groups):
        group_set = groups[group_id]
        supernode = f"{prefix}{index}"
        node_label_lookup[group_id] = supernode
        supernode_attributes = {
            attr: G.nodes[next(iter(group_set))][attr] for attr in node_attributes
        }
        supernode_attributes[supernode_attribute] = group_set
        output.add_node(supernode, **supernode_attributes)

    for group_id in groups:
        group_set = groups[group_id]
        source_supernode = node_label_lookup[group_id]
        for other_group, group_edge_types in neighbor_info[
            next(iter(group_set))
        ].items():
            if group_edge_types:
                target_supernode = node_label_lookup[other_group]
                summary_graph_edge = (source_supernode, target_supernode)

                edge_types = [
                    dict(zip(edge_attributes, edge_type))
                    for edge_type in group_edge_types
                ]

                has_edge = output.has_edge(*summary_graph_edge)
                if output.is_multigraph():
                    if not has_edge:
                        for edge_type in edge_types:
                            output.add_edge(*summary_graph_edge, **edge_type)
                    elif not output.is_directed():
                        existing_edge_data = output.get_edge_data(*summary_graph_edge)
                        for edge_type in edge_types:
                            if edge_type not in existing_edge_data.values():
                                output.add_edge(*summary_graph_edge, **edge_type)
                else:
                    superedge_attributes = {superedge_attribute: edge_types}
                    output.add_edge(*summary_graph_edge, **superedge_attributes)

    return output



def _snap_eligible_group(G, groups, group_lookup, edge_types):
    """
    Determines if a group is eligible to be split.

    A group is eligible to be split if its nodes do not all have edges of the
    same type(s) with the same other groups.

    Parameters
    ----------
    G: graph
        graph to be summarized
    groups: dict
        A dictionary of unique group IDs and their corresponding node groups
    group_lookup: dict
        dictionary of nodes and their current corresponding group ID
    edge_types: dict
        dictionary of edges in the graph and their corresponding attributes recognized
        in the summarization

    Returns
    -------
    tuple: group ID to split, and neighbor-groups participation_counts data structure
    """

    nbr_info = {node: {gid: Counter() for gid in groups} for node in group_lookup}
    for group_id in groups:
        current_group = groups[group_id]

        # build nbr_info for nodes in group
        for node in current_group:
            nbr_info[node] = {group_id: Counter() for group_id in groups}
            edges = G.edges(node, keys=True) if G.is_multigraph() else G.edges(node)
            for edge in edges:
                neighbor = edge[1]
                edge_type = edge_types[edge]
                neighbor_group_id = group_lookup[neighbor]
                nbr_info[node][neighbor_group_id][edge_type] += 1

        # check if group_id is eligible to be split
        group_size = len(current_group)
        for other_group_id in groups:
            edge_counts = Counter()
            for node in current_group:
                edge_counts.update(nbr_info[node][other_group_id].keys())

            if not all(count == group_size for count in edge_counts.values()):
                # only the nbr_info of the returned group_id is required for handling group splits
                return group_id, nbr_info

    # if no eligible groups, complete nbr_info is calculated
    return None, nbr_info



def _snap_split(groups, neighbor_info, group_lookup, group_id):
    """
    Splits a group based on edge types and updates the groups accordingly

    Splits the group with the given group_id based on the edge types
    of the nodes so that each new grouping will all have the same
    edges with other nodes.

    Parameters
    ----------
    groups: dict
        A dictionary of unique group IDs and their corresponding node groups
    neighbor_info: dict
        A data structure indicating the number of edges a node has with the
        groups in the current summarization of each edge type
    group_lookup: dict
        dictionary of nodes and their current corresponding group ID
    group_id: object
        ID of group to be split

    Returns
    -------
    dict
        The updated groups based on the split
    """

    new_group_mappings = defaultdict(set)
    for node in groups[group_id]:
        signature = tuple(
            frozenset(edge_types) for edge_types in neighbor_info[node].values()
        )
        new_group_mappings[signature].add(node)

    # leave the biggest new_group as the original group
    new_groups = sorted(new_group_mappings.values(), key=len)
    for new_group in new_groups[:-1]:
        # Assign an unused integer as the new_group_id
        # (original group ids are tuples, so integer ids will not collide)
        new_group_id = len(groups)
        groups[new_group_id] = new_group
        groups[group_id] -= new_group
        for node in new_group:
            group_lookup[node] = new_group_id

    return groups



@nx._dispatchable(
    node_attrs="[node_attributes]", edge_attrs="[edge_attributes]", returns_graph=True
)
def snap_aggregation(
    G,
    node_attributes,
    edge_attributes=(),
    prefix="Supernode-",
    supernode_attribute="group",
    superedge_attribute="types",
):

    """Creates a summary graph based on attributes and connectivity.

    This function uses the Summarization by Grouping Nodes on Attributes
    and Pairwise edges (SNAP) algorithm for summarizing a given
    graph by grouping nodes by node attributes and their edge attributes
    into supernodes in a summary graph. This name SNAP should not be
    confused with the Stanford Network Analysis Project (SNAP).

    Here is a high-level view of how this algorithm works:

    1) Group nodes by node attribute values.

    2) Iteratively split groups until all nodes in each group have edges
    to nodes in the same groups. That is, until all the groups are homogeneous
    in their member nodes' edges to other groups. For example,
    if all the nodes in group A only have edges to nodes in group B, then the
    group is homogeneous and does not need to be split. If all nodes in group B
    have edges with nodes in groups {A, C}, but some also have edges with other
    nodes in B, then group B is not homogeneous and needs to be split into a
    group of nodes having edges with {A, C} and a group of nodes having
    edges with {A, B, C}. This way, viewers of the summary graph can
    assume that all nodes in the group have the exact same node attributes and
    the exact same edges.

    3) Build the output summary graph, where the groups are represented by
    super-nodes. Edges represent the edges shared between all the nodes in each
    respective groups.

    A SNAP summary graph can be used to visualize graphs that are too large to
    display or visually analyze, or to efficiently identify sets of similar nodes
    with similar connectivity patterns to other sets of similar nodes based on
    specified node and/or edge attributes in a graph.


    Parameters
    ----------
    G: graph
        Networkx Graph to be summarized
    node_attributes: iterable, required
        An iterable of the node attributes used to group nodes in the
        summarization process. Nodes with the same values for these attributes
        will be grouped together in the summary graph.
    edge_attributes: iterable, optional
        An iterable of the edge attributes considered in the summarization
        process. If provided, unique combinations of the attribute values found
        in the graph are used to determine the edge types in the graph. If not
        provided, all edges are considered to be of the same type.
    prefix: str
        The prefix used to denote supernodes in the summary graph. Defaults to
        'Supernode-'.
    supernode_attribute: str
        The node attribute for recording the supernode groupings of nodes.
        Defaults to 'group'.
    superedge_attribute: str
        The edge attribute for recording the edge types of multiple edges.
        Defaults to 'types'.

    Returns
    -------
    networkx.Graph: summary graph

    Examples
    --------
    SNAP aggregation takes a graph and summarizes it in the context of
    user-provided node and edge attributes such that a viewer can more easily
    extract and analyze the information represented by the graph

    >>> nodes = {
    ...     "A": dict(color="Red"),
    ...     "B": dict(color="Red"),
    ...     "C": dict(color="Red"),
    ...     "D": dict(color="Red"),
    ...     "E": dict(color="Blue"),
    ...     "F": dict(color="Blue"),
    ... }
    >>> edges = [
    ...     ("A", "E", "Strong"),
    ...     ("B", "F", "Strong"),
    ...     ("C", "E", "Weak"),
    ...     ("D", "F", "Weak"),
    ... ]
    >>> G = nx.Graph()
    >>> for node in nodes:
    ...     attributes = nodes[node]
    ...     G.add_node(node, **attributes)
    >>> for source, target, type in edges:
    ...     G.add_edge(source, target, type=type)
    >>> node_attributes = ("color",)
    >>> edge_attributes = ("type",)
    >>> summary_graph = nx.snap_aggregation(
    ...     G, node_attributes=node_attributes, edge_attributes=edge_attributes
    ... )

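    The resulting grouping can be inspected through the ``group`` attribute of
    the supernodes. In the example above, the Red nodes split according to their
    edge type with the Blue nodes, yielding three supernodes (a sketch assuming
    the default ``supernode_attribute="group"``):

```python
import networkx as nx

# Rebuild the example graph: Red nodes {A, B} reach Blue nodes via "Strong"
# edges while {C, D} reach them via "Weak" edges, so the Red group splits.
G = nx.Graph()
for node, color in [("A", "Red"), ("B", "Red"), ("C", "Red"),
                    ("D", "Red"), ("E", "Blue"), ("F", "Blue")]:
    G.add_node(node, color=color)
for u, v, t in [("A", "E", "Strong"), ("B", "F", "Strong"),
                ("C", "E", "Weak"), ("D", "F", "Weak")]:
    G.add_edge(u, v, type=t)
summary = nx.snap_aggregation(
    G, node_attributes=("color",), edge_attributes=("type",)
)
# Each supernode records its member nodes in the "group" attribute.
groupings = {frozenset(data["group"]) for _, data in summary.nodes(data=True)}
```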

    Notes
    -----
    The summary graph produced is called a maximum Attribute-edge
    compatible (AR-compatible) grouping. According to [1]_, an
    AR-compatible grouping means that all nodes in each group have the same
    exact node attribute values and the same exact edges and
    edge types to one or more nodes in the same groups. The maximal
    AR-compatible grouping is the grouping with the minimal cardinality.

    The AR-compatible grouping is the most detailed grouping provided by
    any of the SNAP algorithms.


    References
    ----------
    .. [1] Y. Tian, R. A. Hankins, and J. M. Patel. Efficient aggregation
       for graph summarization. In Proc. 2008 ACM-SIGMOD Int. Conf.
       Management of Data (SIGMOD’08), pages 567–580, Vancouver, Canada,
       June 2008.
    """

    edge_types = {
        edge: tuple(attrs.get(attr) for attr in edge_attributes)
        for edge, attrs in G.edges.items()
    }
    if not G.is_directed():
        if G.is_multigraph():
            # list is needed to avoid mutating while iterating
            edges = [((v, u, k), etype) for (u, v, k), etype in edge_types.items()]
        else:
            # list is needed to avoid mutating while iterating
            edges = [((v, u), etype) for (u, v), etype in edge_types.items()]
        edge_types.update(edges)

    group_lookup = {
        node: tuple(attrs[attr] for attr in node_attributes)
        for node, attrs in G.nodes.items()
    }
    groups = defaultdict(set)
    for node, node_type in group_lookup.items():
        groups[node_type].add(node)

    eligible_group_id, nbr_info = _snap_eligible_group(
        G, groups, group_lookup, edge_types
    )
    while eligible_group_id:
        groups = _snap_split(groups, nbr_info, group_lookup, eligible_group_id)
        eligible_group_id, nbr_info = _snap_eligible_group(
            G, groups, group_lookup, edge_types
        )
    return _snap_build_graph(
        G,
        groups,
        node_attributes,
        edge_attributes,
        nbr_info,
        edge_types,
        prefix,
        supernode_attribute,
        superedge_attribute,
    )