Commit

build all docs
pyynb committed Aug 27, 2024
1 parent 297a5fc commit 1c4b7a0
Showing 11,093 changed files with 2,555,906 additions and 24,637 deletions.
Note: the diff is too large to display in full; only the first 3,000 changed files are loaded.
Binary file modified _downloads/11500a931a1c5f9e708dfb4dfc465efe/2_dglgraph.zip
Binary file modified _downloads/1903c142e91864df519198e1fee658e8/6_load_data.zip
Binary file modified _downloads/36091057899f35ff6b2929c3f090d1b1/1_gcn.zip
Binary file modified _downloads/40a3cadd8f5b33a55c8d4b43619eeead/4_rgcn.zip
Binary file modified _downloads/59aab1379739241d2fa4280f1e7f7b0d/4_link_predict.zip
Binary file modified _downloads/7746d538aab61caed46c14c3df400eaa/1_introduction.zip
Binary file modified _downloads/8db7ffb9a0985efae446d8584cf63f1d/3_tree-lstm.zip
Binary file modified _downloads/99b4e83db6fff7f810348a10ace8cb1a/7_transformer.zip
Binary file modified _downloads/acfa3347f63d993639b037e39bcaab21/2_capsule.zip
Binary file modified _downloads/b3bc1d9d825616020677599977e5c38a/argo_tutorial.zip
Binary file modified _downloads/d2f7e5b2b2b0e6c91385bc9507ee6cd4/9_gat.zip
Binary file modified _downloads/de21d2a2463df90e341f4e750a5dd0bc/6_line_graph.zip
Binary file modified _downloads/efd5ed97d4d759d5bbbf4ce4ecb2a6dc/5_dgmg.zip
Binary file modified _images/sphx_glr_6_line_graph_001.png
Binary file modified _images/sphx_glr_6_line_graph_002.png
Binary file modified _images/sphx_glr_6_line_graph_003.png
Binary file modified _images/sphx_glr_6_line_graph_thumb.png
5 changes: 4 additions & 1 deletion _modules/dgl/graphbolt/base.html

@@ -311,6 +311,8 @@ Source code for dgl.graphbolt.base

         return indptr.new_empty(output_size, dtype=dtype)
 
 
+<div class="viewcode-block" id="indptr_edge_ids">
+<a class="viewcode-back" href="../../../generated/dgl.graphbolt.indptr_edge_ids.html#dgl.graphbolt.indptr_edge_ids">[docs]</a>
 def indptr_edge_ids(indptr, dtype=None, offset=None, output_size=None):
     """Converts a given indptr offset tensor to a COO format tensor for the edge
     ids. For a given indptr [0, 2, 5, 7] and offset tensor [0, 100, 200], the

@@ -341,7 +343,8 @@

         dtype = offset.dtype
     return torch.ops.graphbolt.indptr_edge_ids(
         indptr, dtype, offset, output_size
-    )
+    )</div>
 
 
 <div class="viewcode-block" id="index_select">
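The docstring in the hunk above is truncated, but its example inputs (indptr `[0, 2, 5, 7]`, offset `[0, 100, 200]`) suggest the conversion: node `i`, with degree `indptr[i+1] - indptr[i]`, contributes that many consecutive edge ids starting at `offset[i]`. The following pure-Python sketch illustrates that inferred behavior; it is not the actual `torch.ops.graphbolt.indptr_edge_ids` implementation, and the default for a missing `offset` is an assumption.

```python
def indptr_edge_ids_sketch(indptr, offset=None):
    """Hypothetical rendering of the conversion hinted at by the truncated
    docstring: node i (degree indptr[i+1] - indptr[i]) contributes
    consecutive edge ids starting at offset[i]."""
    if offset is None:
        # Assumed default: each node's edge ids start at its indptr position.
        offset = indptr[:-1]
    edge_ids = []
    for i in range(len(indptr) - 1):
        degree = indptr[i + 1] - indptr[i]
        edge_ids.extend(range(offset[i], offset[i] + degree))
    return edge_ids

# The docstring's own example values:
print(indptr_edge_ids_sketch([0, 2, 5, 7], offset=[0, 100, 200]))
# [0, 1, 100, 101, 102, 200, 201]
```

With no offset tensor, the sketch simply enumerates edges in CSR order, which is consistent with indptr being exclusive prefix sums of node degrees.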
55 changes: 27 additions & 28 deletions _modules/dgl/graphbolt/dataloader.html

@@ -127,7 +127,6 @@ Source code for dgl.graphbolt.dataloader

 import torch.utils.data as torch_data
 
 from .base import CopyTo
-
 from .datapipes import (
     datapipe_graph_to_adjlist,
     find_dps,

@@ -138,6 +137,7 @@

 from .impl.neighbor_sampler import SamplePerLayer
 from .internal_utils import gb_warning
 from .item_sampler import ItemSampler
+from .minibatch_transformer import MiniBatchTransformer
 
 
 __all__ = [

@@ -200,7 +200,7 @@

 <div class="viewcode-block" id="DataLoader">
 <a class="viewcode-back" href="../../../generated/dgl.graphbolt.DataLoader.html#dgl.graphbolt.DataLoader">[docs]</a>
-class DataLoader(torch_data.DataLoader):
+class DataLoader(MiniBatchTransformer):
     """Multiprocessing DataLoader.
 
     Iterates over the data pipeline with everything before feature fetching

@@ -247,32 +247,33 @@

         datapipe = datapipe.mark_end()
         datapipe_graph = traverse_dps(datapipe)
 
-        # (1) Insert minibatch distribution.
-        # TODO(BarclayII): Currently I'm using sharding_filter() as a
-        # concept demonstration. Later on minibatch distribution should be
-        # merged into ItemSampler to maximize efficiency.
-        item_samplers = find_dps(
-            datapipe_graph,
-            ItemSampler,
-        )
-        for item_sampler in item_samplers:
-            datapipe_graph = replace_dp(
-                datapipe_graph,
-                item_sampler,
-                item_sampler.sharding_filter(),
-            )
-
-        # (2) Cut datapipe at FeatureFetcher and wrap.
-        datapipe_graph = _find_and_wrap_parent(
-            datapipe_graph,
-            FeatureFetcherStartMarker,
-            MultiprocessingWrapper,
-            num_workers=num_workers,
-            persistent_workers=persistent_workers,
-        )
+        if num_workers > 0:
+            # (1) Insert minibatch distribution.
+            # TODO(BarclayII): Currently I'm using sharding_filter() as a
+            # concept demonstration. Later on minibatch distribution should be
+            # merged into ItemSampler to maximize efficiency.
+            item_samplers = find_dps(
+                datapipe_graph,
+                ItemSampler,
+            )
+            for item_sampler in item_samplers:
+                datapipe_graph = replace_dp(
+                    datapipe_graph,
+                    item_sampler,
+                    item_sampler.sharding_filter(),
+                )
+
+            # (2) Cut datapipe at FeatureFetcher and wrap.
+            datapipe_graph = _find_and_wrap_parent(
+                datapipe_graph,
+                FeatureFetcherStartMarker,
+                MultiprocessingWrapper,
+                num_workers=num_workers,
+                persistent_workers=persistent_workers,
+            )
 
-        # (3) Limit the number of UVA threads used if the feature_fetcher has
-        # overlapping optimization enabled.
+        # (3) Limit the number of UVA threads used if the feature_fetcher
+        # or any of the samplers have overlapping optimization enabled.
         if num_workers == 0 and torch.cuda.is_available():
             feature_fetchers = find_dps(
                 datapipe_graph,

@@ -312,9 +313,7 @@

             ),
         )
 
-        # The stages after feature fetching is still done in the main process.
-        # So we set num_workers to 0 here.
-        super().__init__(datapipe, batch_size=None, num_workers=0)
+        super().__init__(datapipe)
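The dataloader change above guards the graph rewriting (finding every `ItemSampler` and splicing a sharding stage in after it) behind `num_workers > 0`. The real code walks a torchdata datapipe DAG with `find_dps`/`replace_dp`; the sketch below models the same rewrite on a plain list of stage names purely for illustration, with `insert_sharding` and the stage labels being hypothetical names.

```python
def insert_sharding(stages, num_workers):
    """Return a new stage list with "ShardingFilter" spliced in after each
    "ItemSampler" stage, but only when worker processes are in use,
    mirroring the diff's new num_workers > 0 guard."""
    if num_workers == 0:
        # Single-process loading: no minibatch distribution is needed.
        return list(stages)
    rewritten = []
    for stage in stages:
        rewritten.append(stage)
        if stage == "ItemSampler":
            # Distribute minibatches across worker processes.
            rewritten.append("ShardingFilter")
    return rewritten

pipeline = ["ItemSampler", "NeighborSampler", "FeatureFetcher", "CopyTo"]
print(insert_sharding(pipeline, num_workers=0))
# ['ItemSampler', 'NeighborSampler', 'FeatureFetcher', 'CopyTo']
print(insert_sharding(pipeline, num_workers=4))
# ['ItemSampler', 'ShardingFilter', 'NeighborSampler', 'FeatureFetcher', 'CopyTo']
```

Skipping the rewrite entirely in the single-process case avoids paying for sharding machinery when there is nothing to distribute, which appears to be the point of the new guard.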

0 comments on commit 1c4b7a0
