This is the blog section. It has two categories: News and Releases.
Files in these directories will be listed in reverse chronological order.
Text can be bold, italic, or strikethrough. Links should be blue with no underlines (unless hovered over).
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be no margin above this first sentence.
Blockquotes should be a lighter gray with a border along the left side in the secondary color.
There should be no margin below this final sentence.
This is a normal paragraph following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
On big screens, paragraphs and headings should not take up the full container width, but we want tables, code blocks and similar to take the full width.
Lorem markdownum tuta hospes stabat; idem saxum facit quaterque repetito occumbere, oves novem gestit haerebat frena; qui. Respicit recurvam erat: pignora hinc reppulit nos aut, aptos, ipsa.
Meae optatos passa est Epiros utiliter Talibus niveis, hoc lata, edidit. Dixi ad aestum.
Header 2
This is a blockquote following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Header 3
This is a code block following a header.
Header 4
- This is an unordered list following a header.
- This is an unordered list following a header.
- This is an unordered list following a header.
Header 5
1. This is an ordered list following a header.
2. This is an ordered list following a header.
3. This is an ordered list following a header.
Header 6
What | Follows |
---|---|
A table | A header |
A table | A header |
A table | A header |
There’s a horizontal rule above and below this.
Here is an unordered list:
- Salt-n-Pepa
- Bel Biv DeVoe
- Kid ‘N Play
And an ordered list:
1. Michael Jackson
2. Michael Bolton
3. Michael Bublé
And an unordered task list:
- [ ] Create a sample markdown document
- [ ] Add task lists to it
- [ ] Take a vacation
And a “mixed” task list:
- [ ] Steal underpants
- ?
- [ ] Profit!
And a nested list:
- Jackson 5
  - Michael
  - Tito
  - Jackie
  - Marlon
  - Jermaine
- TMNT
  - Leonardo
  - Michelangelo
  - Donatello
  - Raphael
Definition lists can be used with Markdown syntax. Definition terms are bold.
Name
: Godzilla

Born
: 1952

Birthplace
: Japan

Color
: Green
Tables should have bold headings and alternating shaded rows.
Artist | Album | Year |
---|---|---|
Michael Jackson | Thriller | 1982 |
Prince | Purple Rain | 1984 |
Beastie Boys | License to Ill | 1986 |
If a table is too wide, it should scroll horizontally.
Artist | Album | Year | Label | Awards | Songs |
---|---|---|---|---|---|
Michael Jackson | Thriller | 1982 | Epic Records | Grammy Award for Album of the Year, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Selling Album, Grammy Award for Best Engineered Album, Non-Classical | Wanna Be Startin’ Somethin’, Baby Be Mine, The Girl Is Mine, Thriller, Beat It, Billie Jean, Human Nature, P.Y.T. (Pretty Young Thing), The Lady in My Life |
Prince | Purple Rain | 1984 | Warner Brothers Records | Grammy Award for Best Score Soundtrack for Visual Media, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Soundtrack/Cast Recording, Grammy Award for Best Rock Performance by a Duo or Group with Vocal | Let’s Go Crazy, Take Me With U, The Beautiful Ones, Computer Blue, Darling Nikki, When Doves Cry, I Would Die 4 U, Baby I’m a Star, Purple Rain |
Beastie Boys | License to Ill | 1986 | Mercury Records | noawardsbutthistablecelliswide | Rhymin & Stealin, The New Style, She’s Crafty, Posse in Effect, Slow Ride, Girls, (You Gotta) Fight for Your Right, No Sleep Till Brooklyn, Paul Revere, Hold It Now, Hit It, Brass Monkey, Slow and Low, Time to Get Ill |
Code snippets like var foo = "bar"; can be shown inline.
Also, this should vertically align with this and this.
Code can also be shown in a block element.
foo := "bar";
+bar := "foo";
+
Code can also use syntax highlighting.
func main() {
    input := `var foo = "bar";`
    lexer := lexers.Get("javascript")
    // ...
    fmt.Println(buff.String())
}
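The abridged snippet above appears to use the Chroma highlighting library; a self-contained sketch under that assumption (import paths assume Chroma v1, github.com/alecthomas/chroma) could look like this:

package main

import (
    "bytes"
    "fmt"

    "github.com/alecthomas/chroma/formatters/html"
    "github.com/alecthomas/chroma/lexers"
    "github.com/alecthomas/chroma/styles"
)

func main() {
    input := `var foo = "bar";`

    // Pick a lexer for the input language, falling back if none matches.
    lexer := lexers.Get("javascript")
    if lexer == nil {
        lexer = lexers.Fallback
    }

    // Tokenise the source, then render the token stream as styled HTML.
    iterator, err := lexer.Tokenise(nil, input)
    if err != nil {
        panic(err)
    }

    var buff bytes.Buffer
    if err := html.New().Format(&buff, styles.Get("github"), iterator); err != nil {
        panic(err)
    }

    fmt.Println(buff.String())
}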
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
Inline code inside table cells should still be distinguishable.
Language | Code |
---|---|
Javascript | var foo = "bar"; |
Ruby | foo = "bar"{ |
Small images should be shown at their actual size.
Large images should always scale down and fit in the content container.
Components
Alerts
This is an alert.
Note: This is an alert with a title.
This is a successful alert.
This is a warning!
Warning! This is a warning with a title!
Sizing
Add some sections here to see what the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Parameters available
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Using pixels
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Using rem
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Memory
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
RAM to use
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
More is better
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Used RAM
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is the final element on the page and there should be no margin below this.
This is a typical blog post that includes images.
The front matter specifies the date of the blog post, its title, a short description that will be displayed on the blog landing page, and its author.
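For illustration, front matter carrying those fields might look like the sketch below; every value here is invented for the example:

---
date: 2018-06-15
title: "A blog post with images"
description: "A short summary shown on the blog landing page."
author: "Riona MacNamara"
---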
Here’s an image (featured-sunset-get.png) that includes a byline and a caption.
Fetch and scale an image in the upcoming Hugo 0.43.
Photo: Riona MacNamara / CC-BY-CA
The front matter of this post specifies properties to be assigned to all image resources:
resources:
- src: "**.{png,jpg}"
  title: "Image #:counter"
  params:
    byline: "Photo: Riona MacNamara / CC-BY-CA"
To include the image in a page, specify its details like this:
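As an illustrative sketch only, Docsy's imgproc shortcode renders a named page resource; the resource name, resize mode, and dimensions below are assumptions:

{{< imgproc featured-sunset Fill "600x300" >}}
Fetch and scale an image in the upcoming Hugo 0.43.
{{< /imgproc >}}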
The image will be rendered at the size and byline specified in the front matter.
Text can be bold, italic, or strikethrough. Links should be blue with no underlines (unless hovered over).
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be no margin above this first sentence.
Blockquotes should be a lighter gray with a border along the left side in the secondary color.
There should be no margin below this final sentence.
This is a normal paragraph following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
On big screens, paragraphs and headings should not take up the full container width, but we want tables, code blocks and similar to take the full width.
Lorem markdownum tuta hospes stabat; idem saxum facit quaterque repetito occumbere, oves novem gestit haerebat frena; qui. Respicit recurvam erat: pignora hinc reppulit nos aut, aptos, ipsa.
Meae optatos passa est Epiros utiliter Talibus niveis, hoc lata, edidit. -Dixi ad aestum.
This is a blockquote following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is a code block following a header.
-
What | Follows |
---|---|
A table | A header |
A table | A header |
A table | A header |
There’s a horizontal rule above and below this.
Here is an unordered list:
And an ordered list:
And an unordered task list:
And a “mixed” task list:
And a nested list:
Definition lists can be used with Markdown syntax. Definition terms are bold.
Tables should have bold headings and alternating shaded rows.
Artist | Album | Year |
---|---|---|
Michael Jackson | Thriller | 1982 |
Prince | Purple Rain | 1984 |
Beastie Boys | License to Ill | 1986 |
If a table is too wide, it should scroll horizontally.
Artist | Album | Year | Label | Awards | Songs |
---|---|---|---|---|---|
Michael Jackson | Thriller | 1982 | Epic Records | Grammy Award for Album of the Year, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Selling Album, Grammy Award for Best Engineered Album, Non-Classical | Wanna Be Startin’ Somethin’, Baby Be Mine, The Girl Is Mine, Thriller, Beat It, Billie Jean, Human Nature, P.Y.T. (Pretty Young Thing), The Lady in My Life |
Prince | Purple Rain | 1984 | Warner Brothers Records | Grammy Award for Best Score Soundtrack for Visual Media, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Soundtrack/Cast Recording, Grammy Award for Best Rock Performance by a Duo or Group with Vocal | Let’s Go Crazy, Take Me With U, The Beautiful Ones, Computer Blue, Darling Nikki, When Doves Cry, I Would Die 4 U, Baby I’m a Star, Purple Rain |
Beastie Boys | License to Ill | 1986 | Mercury Records | noawardsbutthistablecelliswide | Rhymin & Stealin, The New Style, She’s Crafty, Posse in Effect, Slow Ride, Girls, (You Gotta) Fight for Your Right, No Sleep Till Brooklyn, Paul Revere, Hold It Now, Hit It, Brass Monkey, Slow and Low, Time to Get Ill |
Code snippets like var foo = "bar";
can be shown inline.
Also, this should vertically align
with this
and this.
Code can also be shown in a block element.
foo := "bar";
-bar := "foo";
-
Code can also use syntax highlighting.
func main() {
+Dixi ad aestum.Header 2
This is a blockquote following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Header 3
This is a code block following a header.
+
Header 4
- This is an unordered list following a header.
- This is an unordered list following a header.
- This is an unordered list following a header.
Header 5
- This is an ordered list following a header.
- This is an ordered list following a header.
- This is an ordered list following a header.
Header 6
What Follows A table A header A table A header A table A header
There’s a horizontal rule above and below this.
Here is an unordered list:
- Salt-n-Pepa
- Bel Biv DeVoe
- Kid ‘N Play
And an ordered list:
- Michael Jackson
- Michael Bolton
- Michael Bublé
And an unordered task list:
- Create a sample markdown document
- Add task lists to it
- Take a vacation
And a “mixed” task list:
- Steal underpants
- ?
- Profit!
And a nested list:
- Jackson 5
- Michael
- Tito
- Jackie
- Marlon
- Jermaine
- TMNT
- Leonardo
- Michelangelo
- Donatello
- Raphael
Definition lists can be used with Markdown syntax. Definition terms are bold.
- Name
- Godzilla
- Born
- 1952
- Birthplace
- Japan
- Color
- Green
Tables should have bold headings and alternating shaded rows.
Artist Album Year Michael Jackson Thriller 1982 Prince Purple Rain 1984 Beastie Boys License to Ill 1986
If a table is too wide, it should scroll horizontally.
Artist Album Year Label Awards Songs Michael Jackson Thriller 1982 Epic Records Grammy Award for Album of the Year, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Selling Album, Grammy Award for Best Engineered Album, Non-Classical Wanna Be Startin’ Somethin’, Baby Be Mine, The Girl Is Mine, Thriller, Beat It, Billie Jean, Human Nature, P.Y.T. (Pretty Young Thing), The Lady in My Life Prince Purple Rain 1984 Warner Brothers Records Grammy Award for Best Score Soundtrack for Visual Media, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Soundtrack/Cast Recording, Grammy Award for Best Rock Performance by a Duo or Group with Vocal Let’s Go Crazy, Take Me With U, The Beautiful Ones, Computer Blue, Darling Nikki, When Doves Cry, I Would Die 4 U, Baby I’m a Star, Purple Rain Beastie Boys License to Ill 1986 Mercury Records noawardsbutthistablecelliswide Rhymin & Stealin, The New Style, She’s Crafty, Posse in Effect, Slow Ride, Girls, (You Gotta) Fight for Your Right, No Sleep Till Brooklyn, Paul Revere, Hold It Now, Hit It, Brass Monkey, Slow and Low, Time to Get Ill
Code snippets like var foo = "bar";
can be shown inline.
Also, this should vertically align
with this
and this.
Code can also be shown in a block element.
foo := "bar";
+bar := "foo";
+
Code can also use syntax highlighting.
func main() {
input := `var foo = "bar";`
lexer := lexers.Get("javascript")
@@ -27,9 +27,9 @@
fmt.Println(buff.String())
}
-
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
-
Inline code inside table cells should still be distinguishable.
Language Code Javascript var foo = "bar";
Ruby foo = "bar"{
Small images should be shown at their actual size.
Large images should always scale down and fit in the content container.
Components
Alerts
This is an alert.Note:
This is an alert with a title.This is a successful alert.This is a warning!Warning!
This is a warning with a title!Sizing
Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Parameters available
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Using pixels
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Using rem
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Memory
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
RAM to use
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
More is better
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Used RAM
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is the final element on the page and there should be no margin below this.
-
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
+
Inline code inside table cells should still be distinguishable.
Language | Code |
---|---|
Javascript | var foo = "bar"; |
Ruby | foo = "bar"{ |
Small images should be shown at their actual size.
Large images should always scale down and fit in the content container.
Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is the final element on the page and there should be no margin below this.
+
This is the multi-page printable view of this section. Click here to print.
This is the blog section. It has two categories: News and Releases.
Files in these directories will be listed in reverse chronological order.
This is a typical blog post that includes images.
The front matter specifies the date of the blog post, its title, a short description that will be displayed on the blog landing page, and its author.
Here’s an image (featured-sunset-get.png
) that includes a byline and a caption.
Fetch and scale an image in the upcoming Hugo 0.43.
-
Photo: Riona MacNamara / CC-BY-CA
The front matter of this post specifies properties to be assigned to all image resources:
resources:
-- src: "**.{png,jpg}"
- title: "Image #:counter"
- params:
- byline: "Photo: Riona MacNamara / CC-BY-CA"
-
To include the image in a page, specify its details like this:
+
Photo: Riona MacNamara / CC-BY-CA
The front matter of this post specifies properties to be assigned to all image resources:
resources:
+- src: "**.{png,jpg}"
+ title: "Image #:counter"
+ params:
+ byline: "Photo: Riona MacNamara / CC-BY-CA"
+
To include the image in a page, specify its details like this:
@@ -26,13 +26,13 @@
The image will be rendered at the size and byline specified in the front matter.
Text can be bold, italic, or strikethrough. Links should be blue with no underlines (unless hovered over).
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be no margin above this first sentence.
Blockquotes should be a lighter gray with a border along the left side in the secondary color.
There should be no margin below this final sentence.
This is a normal paragraph following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
On big screens, paragraphs and headings should not take up the full container width, but we want tables, code blocks and similar to take the full width.
Lorem markdownum tuta hospes stabat; idem saxum facit quaterque repetito +
The image will be rendered at the size and byline specified in the front matter.
Text can be bold, italic, or strikethrough. Links should be blue with no underlines (unless hovered over).
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be no margin above this first sentence.
Blockquotes should be a lighter gray with a border along the left side in the secondary color.
There should be no margin below this final sentence.
This is a normal paragraph following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
On big screens, paragraphs and headings should not take up the full container width, but we want tables, code blocks and similar to take the full width.
Lorem markdownum tuta hospes stabat; idem saxum facit quaterque repetito occumbere, oves novem gestit haerebat frena; qui. Respicit recurvam erat: pignora hinc reppulit nos aut, aptos, ipsa.
Meae optatos passa est Epiros utiliter Talibus niveis, hoc lata, edidit. -Dixi ad aestum.
This is a blockquote following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is a code block following a header.
-
What | Follows |
---|---|
A table | A header |
A table | A header |
A table | A header |
There’s a horizontal rule above and below this.
Here is an unordered list:
And an ordered list:
And an unordered task list:
And a “mixed” task list:
And a nested list:
Definition lists can be used with Markdown syntax. Definition terms are bold.
Tables should have bold headings and alternating shaded rows.
Artist | Album | Year |
---|---|---|
Michael Jackson | Thriller | 1982 |
Prince | Purple Rain | 1984 |
Beastie Boys | License to Ill | 1986 |
If a table is too wide, it should scroll horizontally.
Artist | Album | Year | Label | Awards | Songs |
---|---|---|---|---|---|
Michael Jackson | Thriller | 1982 | Epic Records | Grammy Award for Album of the Year, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Selling Album, Grammy Award for Best Engineered Album, Non-Classical | Wanna Be Startin’ Somethin’, Baby Be Mine, The Girl Is Mine, Thriller, Beat It, Billie Jean, Human Nature, P.Y.T. (Pretty Young Thing), The Lady in My Life |
Prince | Purple Rain | 1984 | Warner Brothers Records | Grammy Award for Best Score Soundtrack for Visual Media, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Soundtrack/Cast Recording, Grammy Award for Best Rock Performance by a Duo or Group with Vocal | Let’s Go Crazy, Take Me With U, The Beautiful Ones, Computer Blue, Darling Nikki, When Doves Cry, I Would Die 4 U, Baby I’m a Star, Purple Rain |
Beastie Boys | License to Ill | 1986 | Mercury Records | noawardsbutthistablecelliswide | Rhymin & Stealin, The New Style, She’s Crafty, Posse in Effect, Slow Ride, Girls, (You Gotta) Fight for Your Right, No Sleep Till Brooklyn, Paul Revere, Hold It Now, Hit It, Brass Monkey, Slow and Low, Time to Get Ill |
Code snippets like var foo = "bar";
can be shown inline.
Also, this should vertically align
with this
and this.
Code can also be shown in a block element.
foo := "bar";
-bar := "foo";
-
Code can also use syntax highlighting.
func main() {
+Dixi ad aestum.Header 2
This is a blockquote following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Header 3
This is a code block following a header.
+
Header 4
- This is an unordered list following a header.
- This is an unordered list following a header.
- This is an unordered list following a header.
Header 5
- This is an ordered list following a header.
- This is an ordered list following a header.
- This is an ordered list following a header.
Header 6
What Follows A table A header A table A header A table A header
There’s a horizontal rule above and below this.
Here is an unordered list:
- Salt-n-Pepa
- Bel Biv DeVoe
- Kid ‘N Play
And an ordered list:
- Michael Jackson
- Michael Bolton
- Michael Bublé
And an unordered task list:
- Create a sample markdown document
- Add task lists to it
- Take a vacation
And a “mixed” task list:
- Steal underpants
- ?
- Profit!
And a nested list:
- Jackson 5
- Michael
- Tito
- Jackie
- Marlon
- Jermaine
- TMNT
- Leonardo
- Michelangelo
- Donatello
- Raphael
Definition lists can be used with Markdown syntax. Definition terms are bold.
- Name
- Godzilla
- Born
- 1952
- Birthplace
- Japan
- Color
- Green
Tables should have bold headings and alternating shaded rows.
Artist Album Year Michael Jackson Thriller 1982 Prince Purple Rain 1984 Beastie Boys License to Ill 1986
If a table is too wide, it should scroll horizontally.
Artist Album Year Label Awards Songs Michael Jackson Thriller 1982 Epic Records Grammy Award for Album of the Year, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Selling Album, Grammy Award for Best Engineered Album, Non-Classical Wanna Be Startin’ Somethin’, Baby Be Mine, The Girl Is Mine, Thriller, Beat It, Billie Jean, Human Nature, P.Y.T. (Pretty Young Thing), The Lady in My Life Prince Purple Rain 1984 Warner Brothers Records Grammy Award for Best Score Soundtrack for Visual Media, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Soundtrack/Cast Recording, Grammy Award for Best Rock Performance by a Duo or Group with Vocal Let’s Go Crazy, Take Me With U, The Beautiful Ones, Computer Blue, Darling Nikki, When Doves Cry, I Would Die 4 U, Baby I’m a Star, Purple Rain Beastie Boys License to Ill 1986 Mercury Records noawardsbutthistablecelliswide Rhymin & Stealin, The New Style, She’s Crafty, Posse in Effect, Slow Ride, Girls, (You Gotta) Fight for Your Right, No Sleep Till Brooklyn, Paul Revere, Hold It Now, Hit It, Brass Monkey, Slow and Low, Time to Get Ill
Code snippets like var foo = "bar";
can be shown inline.
Also, this should vertically align
with this
and this.
Code can also be shown in a block element.
foo := "bar";
+bar := "foo";
+
Code can also use syntax highlighting.
func main() {
input := `var foo = "bar";`
lexer := lexers.Get("javascript")
@@ -45,15 +45,15 @@
fmt.Println(buff.String())
}
-
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
-
Inline code inside table cells should still be distinguishable.
Language Code Javascript var foo = "bar";
Ruby foo = "bar"{
Small images should be shown at their actual size.
Large images should always scale down and fit in the content container.
Components
Alerts
This is an alert.Note:
This is an alert with a title.This is a successful alert.This is a warning!Warning!
This is a warning with a title!Sizing
Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Parameters available
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Using pixels
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Using rem
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Memory
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
RAM to use
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
More is better
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Used RAM
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is the final element on the page and there should be no margin below this.
-
Text can be bold, italic, or strikethrough. Links should be blue with no underlines (unless hovered over).
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be no margin above this first sentence.
Blockquotes should be a lighter gray with a border along the left side in the secondary color.
There should be no margin below this final sentence.
This is a normal paragraph following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
On big screens, paragraphs and headings should not take up the full container width, but we want tables, code blocks and similar to take the full width.
Lorem markdownum tuta hospes stabat; idem saxum facit quaterque repetito +
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
+
Inline code inside table cells should still be distinguishable.
Language | Code |
---|---|
Javascript | var foo = "bar"; |
Ruby | foo = "bar"{ |
Small images should be shown at their actual size.
Large images should always scale down and fit in the content container.
Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is the final element on the page and there should be no margin below this.
+
Text can be bold, italic, or strikethrough. Links should be blue with no underlines (unless hovered over).
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be no margin above this first sentence.
Blockquotes should be a lighter gray with a border along the left side in the secondary color.
There should be no margin below this final sentence.
This is a normal paragraph following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
On big screens, paragraphs and headings should not take up the full container width, but we want tables, code blocks and similar to take the full width.
Lorem markdownum tuta hospes stabat; idem saxum facit quaterque repetito occumbere, oves novem gestit haerebat frena; qui. Respicit recurvam erat: pignora hinc reppulit nos aut, aptos, ipsa.
Meae optatos passa est Epiros utiliter Talibus niveis, hoc lata, edidit. -Dixi ad aestum.
This is a blockquote following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is a code block following a header.
-
What | Follows |
---|---|
A table | A header |
A table | A header |
A table | A header |
There’s a horizontal rule above and below this.
Here is an unordered list:
And an ordered list:
And an unordered task list:
And a “mixed” task list:
And a nested list:
Definition lists can be used with Markdown syntax. Definition terms are bold.
Tables should have bold headings and alternating shaded rows.
Artist | Album | Year |
---|---|---|
Michael Jackson | Thriller | 1982 |
Prince | Purple Rain | 1984 |
Beastie Boys | License to Ill | 1986 |
If a table is too wide, it should scroll horizontally.
Artist | Album | Year | Label | Awards | Songs |
---|---|---|---|---|---|
Michael Jackson | Thriller | 1982 | Epic Records | Grammy Award for Album of the Year, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Selling Album, Grammy Award for Best Engineered Album, Non-Classical | Wanna Be Startin’ Somethin’, Baby Be Mine, The Girl Is Mine, Thriller, Beat It, Billie Jean, Human Nature, P.Y.T. (Pretty Young Thing), The Lady in My Life |
Prince | Purple Rain | 1984 | Warner Brothers Records | Grammy Award for Best Score Soundtrack for Visual Media, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Soundtrack/Cast Recording, Grammy Award for Best Rock Performance by a Duo or Group with Vocal | Let’s Go Crazy, Take Me With U, The Beautiful Ones, Computer Blue, Darling Nikki, When Doves Cry, I Would Die 4 U, Baby I’m a Star, Purple Rain |
Beastie Boys | License to Ill | 1986 | Mercury Records | noawardsbutthistablecelliswide | Rhymin & Stealin, The New Style, She’s Crafty, Posse in Effect, Slow Ride, Girls, (You Gotta) Fight for Your Right, No Sleep Till Brooklyn, Paul Revere, Hold It Now, Hit It, Brass Monkey, Slow and Low, Time to Get Ill |
Code snippets like var foo = "bar";
can be shown inline.
Also, this should vertically align
with this
and this.
Code can also be shown in a block element.
foo := "bar";
-bar := "foo";
-
Code can also use syntax highlighting.
func main() {
+Dixi ad aestum.Header 2
This is a blockquote following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Header 3
This is a code block following a header.
+
Header 4
- This is an unordered list following a header.
- This is an unordered list following a header.
- This is an unordered list following a header.
Header 5
- This is an ordered list following a header.
- This is an ordered list following a header.
- This is an ordered list following a header.
Header 6
What Follows A table A header A table A header A table A header
There’s a horizontal rule above and below this.
Here is an unordered list:
- Salt-n-Pepa
- Bel Biv DeVoe
- Kid ‘N Play
And an ordered list:
- Michael Jackson
- Michael Bolton
- Michael Bublé
And an unordered task list:
- Create a sample markdown document
- Add task lists to it
- Take a vacation
And a “mixed” task list:
- Steal underpants
- ?
- Profit!
And a nested list:
- Jackson 5
- Michael
- Tito
- Jackie
- Marlon
- Jermaine
- TMNT
- Leonardo
- Michelangelo
- Donatello
- Raphael
Definition lists can be used with Markdown syntax. Definition terms are bold.
- Name
- Godzilla
- Born
- 1952
- Birthplace
- Japan
- Color
- Green
Tables should have bold headings and alternating shaded rows.
Artist Album Year Michael Jackson Thriller 1982 Prince Purple Rain 1984 Beastie Boys License to Ill 1986
If a table is too wide, it should scroll horizontally.
Artist Album Year Label Awards Songs Michael Jackson Thriller 1982 Epic Records Grammy Award for Album of the Year, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Selling Album, Grammy Award for Best Engineered Album, Non-Classical Wanna Be Startin’ Somethin’, Baby Be Mine, The Girl Is Mine, Thriller, Beat It, Billie Jean, Human Nature, P.Y.T. (Pretty Young Thing), The Lady in My Life Prince Purple Rain 1984 Warner Brothers Records Grammy Award for Best Score Soundtrack for Visual Media, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Soundtrack/Cast Recording, Grammy Award for Best Rock Performance by a Duo or Group with Vocal Let’s Go Crazy, Take Me With U, The Beautiful Ones, Computer Blue, Darling Nikki, When Doves Cry, I Would Die 4 U, Baby I’m a Star, Purple Rain Beastie Boys License to Ill 1986 Mercury Records noawardsbutthistablecelliswide Rhymin & Stealin, The New Style, She’s Crafty, Posse in Effect, Slow Ride, Girls, (You Gotta) Fight for Your Right, No Sleep Till Brooklyn, Paul Revere, Hold It Now, Hit It, Brass Monkey, Slow and Low, Time to Get Ill
Code snippets like var foo = "bar";
can be shown inline.
Also, this should vertically align
with this
and this.
Code can also be shown in a block element.
foo := "bar";
+bar := "foo";
+
Code can also use syntax highlighting.
func main() {
input := `var foo = "bar";`
lexer := lexers.Get("javascript")
@@ -66,9 +66,9 @@
fmt.Println(buff.String())
}
-
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
-
Inline code inside table cells should still be distinguishable.
Language Code Javascript var foo = "bar";
Ruby foo = "bar"{
Small images should be shown at their actual size.
Large images should always scale down and fit in the content container.
Components
Alerts
This is an alert.Note:
This is an alert with a title.This is a successful alert.This is a warning!Warning!
This is a warning with a title!Sizing
Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Parameters available
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Using pixels
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Using rem
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Memory
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
RAM to use
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
More is better
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Used RAM
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is the final element on the page and there should be no margin below this.
-
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
+
Inline code inside table cells should still be distinguishable.
Language | Code |
---|---|
Javascript | var foo = "bar"; |
Ruby | foo = "bar"{ |
Small images should be shown at their actual size.
Large images should always scale down and fit in the content container.
Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is the final element on the page and there should be no margin below this.
+
This is the multi-page printable view of this section. Click here to print.
This is a typical blog post that includes images.
The front matter specifies the date of the blog post, its title, a short description that will be displayed on the blog landing page, and its author.
Here’s an image (featured-sunset-get.png
) that includes a byline and a caption.
Fetch and scale an image in the upcoming Hugo 0.43.
-
Photo: Riona MacNamara / CC-BY-CA
The front matter of this post specifies properties to be assigned to all image resources:
resources:
-- src: "**.{png,jpg}"
- title: "Image #:counter"
- params:
- byline: "Photo: Riona MacNamara / CC-BY-CA"
-
To include the image in a page, specify its details like this:
+
Photo: Riona MacNamara / CC-BY-CA
The front matter of this post specifies properties to be assigned to all image resources:
resources:
+- src: "**.{png,jpg}"
+ title: "Image #:counter"
+ params:
+ byline: "Photo: Riona MacNamara / CC-BY-CA"
+
To include the image in a page, specify its details like this:
The image will be rendered with the size and byline specified in the front matter.
Text can be bold, italic, or strikethrough. Links should be blue with no underlines (unless hovered over).
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be no margin above this first sentence.
Blockquotes should be a lighter gray with a border along the left side in the secondary color.
There should be no margin below this final sentence.
This is a normal paragraph following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
On big screens, paragraphs and headings should not take up the full container width, but we want tables, code blocks and similar to take the full width.
Lorem markdownum tuta hospes stabat; idem saxum facit quaterque repetito occumbere, oves novem gestit haerebat frena; qui. Respicit recurvam erat: pignora hinc reppulit nos aut, aptos, ipsa.
Meae optatos passa est Epiros utiliter Talibus niveis, hoc lata, edidit. Dixi ad aestum.
Header 2
This is a blockquote following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Header 3
This is a code block following a header.
Header 4
- This is an unordered list following a header.
- This is an unordered list following a header.
- This is an unordered list following a header.
Header 5
1. This is an ordered list following a header.
2. This is an ordered list following a header.
3. This is an ordered list following a header.
Header 6
| What    | Follows  |
|---------|----------|
| A table | A header |
| A table | A header |
| A table | A header |
There’s a horizontal rule above and below this.
Here is an unordered list:
- Salt-n-Pepa
- Bel Biv DeVoe
- Kid ‘N Play
And an ordered list:
1. Michael Jackson
2. Michael Bolton
3. Michael Bublé
And an unordered task list:
- Create a sample markdown document
- Add task lists to it
- Take a vacation
And a “mixed” task list:
- Steal underpants
- ?
- Profit!
And a nested list:
- Jackson 5
  - Michael
  - Tito
  - Jackie
  - Marlon
  - Jermaine
- TMNT
  - Leonardo
  - Michelangelo
  - Donatello
  - Raphael
Definition lists can be used with Markdown syntax. Definition terms are bold.
Name
: Godzilla
Born
: 1952
Birthplace
: Japan
Color
: Green
Tables should have bold headings and alternating shaded rows.
| Artist          | Album          | Year |
|-----------------|----------------|------|
| Michael Jackson | Thriller       | 1982 |
| Prince          | Purple Rain    | 1984 |
| Beastie Boys    | License to Ill | 1986 |
If a table is too wide, it should scroll horizontally.
| Artist | Album | Year | Label | Awards | Songs |
|--------|-------|------|-------|--------|-------|
| Michael Jackson | Thriller | 1982 | Epic Records | Grammy Award for Album of the Year, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Selling Album, Grammy Award for Best Engineered Album, Non-Classical | Wanna Be Startin’ Somethin’, Baby Be Mine, The Girl Is Mine, Thriller, Beat It, Billie Jean, Human Nature, P.Y.T. (Pretty Young Thing), The Lady in My Life |
| Prince | Purple Rain | 1984 | Warner Brothers Records | Grammy Award for Best Score Soundtrack for Visual Media, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Soundtrack/Cast Recording, Grammy Award for Best Rock Performance by a Duo or Group with Vocal | Let’s Go Crazy, Take Me With U, The Beautiful Ones, Computer Blue, Darling Nikki, When Doves Cry, I Would Die 4 U, Baby I’m a Star, Purple Rain |
| Beastie Boys | License to Ill | 1986 | Mercury Records | noawardsbutthistablecelliswide | Rhymin & Stealin, The New Style, She’s Crafty, Posse in Effect, Slow Ride, Girls, (You Gotta) Fight for Your Right, No Sleep Till Brooklyn, Paul Revere, Hold It Now, Hit It, Brass Monkey, Slow and Low, Time to Get Ill |
Code snippets like var foo = "bar"; can be shown inline.
Also, this should vertically align with this and this.
Code can also be shown in a block element.
foo := "bar";
bar := "foo";
Code can also use syntax highlighting.
func main() {
	input := `var foo = "bar";`
	lexer := lexers.Get("javascript")
	// Reconstructed from Chroma's documented API (github.com/alecthomas/chroma);
	// the original middle of this snippet was elided in the scrape.
	iterator, _ := lexer.Tokenise(nil, input)
	style := styles.Get("github")
	formatter := html.New(html.WithLineNumbers(true))
	var buff bytes.Buffer
	formatter.Format(&buff, style, iterator)
	fmt.Println(buff.String())
}
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
Inline code inside table cells should still be distinguishable.
| Language   | Code             |
|------------|------------------|
| Javascript | var foo = "bar"; |
| Ruby       | foo = "bar"{     |
Small images should be shown at their actual size.
Large images should always scale down and fit in the content container.
Components
Alerts
This is an alert.
Note: This is an alert with a title.
This is a successful alert.
This is a warning!
Warning! This is a warning with a title!
Sizing
Add some sections here to see what the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Parameters available
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Using pixels
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Using rem
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Memory
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
RAM to use
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
More is better
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Used RAM
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is the final element on the page and there should be no margin below this.
-
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
+
Inline code inside table cells should still be distinguishable.
Language | Code |
---|---|
Javascript | var foo = "bar"; |
Ruby | foo = "bar"{ |
Small images should be shown at their actual size.
Large images should always scale down and fit in the content container.
Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is the final element on the page and there should be no margin below this.
+
Return to the regular view of this page.
Text can be bold, italic, or strikethrough. Links should be blue with no underlines (unless hovered over).
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be no margin above this first sentence.
Blockquotes should be a lighter gray with a border along the left side in the secondary color.
There should be no margin below this final sentence.
This is a normal paragraph following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
On big screens, paragraphs and headings should not take up the full container width, but we want tables, code blocks and similar to take the full width.
Lorem markdownum tuta hospes stabat; idem saxum facit quaterque repetito occumbere, oves novem gestit haerebat frena; qui. Respicit recurvam erat: pignora hinc reppulit nos aut, aptos, ipsa.
Meae optatos passa est Epiros utiliter Talibus niveis, hoc lata, edidit. -Dixi ad aestum.
This is a blockquote following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is a code block following a header.
-
What | Follows |
---|---|
A table | A header |
A table | A header |
A table | A header |
There’s a horizontal rule above and below this.
Here is an unordered list:
And an ordered list:
And an unordered task list:
And a “mixed” task list:
And a nested list:
Definition lists can be used with Markdown syntax. Definition terms are bold.
Tables should have bold headings and alternating shaded rows.
Artist | Album | Year |
---|---|---|
Michael Jackson | Thriller | 1982 |
Prince | Purple Rain | 1984 |
Beastie Boys | License to Ill | 1986 |
If a table is too wide, it should scroll horizontally.
Artist | Album | Year | Label | Awards | Songs |
---|---|---|---|---|---|
Michael Jackson | Thriller | 1982 | Epic Records | Grammy Award for Album of the Year, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Selling Album, Grammy Award for Best Engineered Album, Non-Classical | Wanna Be Startin’ Somethin’, Baby Be Mine, The Girl Is Mine, Thriller, Beat It, Billie Jean, Human Nature, P.Y.T. (Pretty Young Thing), The Lady in My Life |
Prince | Purple Rain | 1984 | Warner Brothers Records | Grammy Award for Best Score Soundtrack for Visual Media, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Soundtrack/Cast Recording, Grammy Award for Best Rock Performance by a Duo or Group with Vocal | Let’s Go Crazy, Take Me With U, The Beautiful Ones, Computer Blue, Darling Nikki, When Doves Cry, I Would Die 4 U, Baby I’m a Star, Purple Rain |
Beastie Boys | License to Ill | 1986 | Mercury Records | noawardsbutthistablecelliswide | Rhymin & Stealin, The New Style, She’s Crafty, Posse in Effect, Slow Ride, Girls, (You Gotta) Fight for Your Right, No Sleep Till Brooklyn, Paul Revere, Hold It Now, Hit It, Brass Monkey, Slow and Low, Time to Get Ill |
Code snippets like var foo = "bar";
can be shown inline.
Also, this should vertically align
with this
and this.
Code can also be shown in a block element.
foo := "bar";
-bar := "foo";
-
Code can also use syntax highlighting.
func main() {
+Dixi ad aestum.Header 2
This is a blockquote following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Header 3
This is a code block following a header.
+
Header 4
- This is an unordered list following a header.
- This is an unordered list following a header.
- This is an unordered list following a header.
Header 5
- This is an ordered list following a header.
- This is an ordered list following a header.
- This is an ordered list following a header.
Header 6
What Follows A table A header A table A header A table A header
There’s a horizontal rule above and below this.
Here is an unordered list:
- Salt-n-Pepa
- Bel Biv DeVoe
- Kid ‘N Play
And an ordered list:
- Michael Jackson
- Michael Bolton
- Michael Bublé
And an unordered task list:
- Create a sample markdown document
- Add task lists to it
- Take a vacation
And a “mixed” task list:
- Steal underpants
- ?
- Profit!
And a nested list:
- Jackson 5
- Michael
- Tito
- Jackie
- Marlon
- Jermaine
- TMNT
- Leonardo
- Michelangelo
- Donatello
- Raphael
Definition lists can be used with Markdown syntax. Definition terms are bold.
- Name
- Godzilla
- Born
- 1952
- Birthplace
- Japan
- Color
- Green
Tables should have bold headings and alternating shaded rows.
Artist Album Year Michael Jackson Thriller 1982 Prince Purple Rain 1984 Beastie Boys License to Ill 1986
If a table is too wide, it should scroll horizontally.
Artist Album Year Label Awards Songs Michael Jackson Thriller 1982 Epic Records Grammy Award for Album of the Year, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Selling Album, Grammy Award for Best Engineered Album, Non-Classical Wanna Be Startin’ Somethin’, Baby Be Mine, The Girl Is Mine, Thriller, Beat It, Billie Jean, Human Nature, P.Y.T. (Pretty Young Thing), The Lady in My Life Prince Purple Rain 1984 Warner Brothers Records Grammy Award for Best Score Soundtrack for Visual Media, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Soundtrack/Cast Recording, Grammy Award for Best Rock Performance by a Duo or Group with Vocal Let’s Go Crazy, Take Me With U, The Beautiful Ones, Computer Blue, Darling Nikki, When Doves Cry, I Would Die 4 U, Baby I’m a Star, Purple Rain Beastie Boys License to Ill 1986 Mercury Records noawardsbutthistablecelliswide Rhymin & Stealin, The New Style, She’s Crafty, Posse in Effect, Slow Ride, Girls, (You Gotta) Fight for Your Right, No Sleep Till Brooklyn, Paul Revere, Hold It Now, Hit It, Brass Monkey, Slow and Low, Time to Get Ill
Code snippets like var foo = "bar";
can be shown inline.
Also, this should vertically align
with this
and this.
Code can also be shown in a block element.
foo := "bar";
+bar := "foo";
+
Code can also use syntax highlighting.
func main() {
input := `var foo = "bar";`
lexer := lexers.Get("javascript")
@@ -19,9 +19,9 @@
fmt.Println(buff.String())
}
-
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
-
Inline code inside table cells should still be distinguishable.
Language Code Javascript var foo = "bar";
Ruby foo = "bar"{
Small images should be shown at their actual size.
Large images should always scale down and fit in the content container.
Components
Alerts
This is an alert.Note:
This is an alert with a title.This is a successful alert.This is a warning!Warning!
This is a warning with a title!Sizing
Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Parameters available
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Using pixels
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Using rem
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Memory
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
RAM to use
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
More is better
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Used RAM
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is the final element on the page and there should be no margin below this.
-
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
+
Inline code inside table cells should still be distinguishable.
Language | Code |
---|---|
Javascript | var foo = "bar"; |
Ruby | foo = "bar"{ |
Small images should be shown at their actual size.
Large images should always scale down and fit in the content container.
Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is the final element on the page and there should be no margin below this.
+
Text can be bold, italic, or strikethrough. Links should be blue with no underlines (unless hovered over).
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be no margin above this first sentence.
Blockquotes should be a lighter gray with a border along the left side in the secondary color.
There should be no margin below this final sentence.
This is a normal paragraph following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
On big screens, paragraphs and headings should not take up the full container width, but we want tables, code blocks and similar to take the full width.
Lorem markdownum tuta hospes stabat; idem saxum facit quaterque repetito occumbere, oves novem gestit haerebat frena; qui. Respicit recurvam erat: pignora hinc reppulit nos aut, aptos, ipsa.
Meae optatos passa est Epiros utiliter Talibus niveis, hoc lata, edidit. -Dixi ad aestum.
This is a blockquote following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is a code block following a header.
-
What | Follows |
---|---|
A table | A header |
A table | A header |
A table | A header |
There’s a horizontal rule above and below this.
Here is an unordered list:
And an ordered list:
And an unordered task list:
And a “mixed” task list:
And a nested list:
Definition lists can be used with Markdown syntax. Definition terms are bold.
Tables should have bold headings and alternating shaded rows.
Artist | Album | Year |
---|---|---|
Michael Jackson | Thriller | 1982 |
Prince | Purple Rain | 1984 |
Beastie Boys | License to Ill | 1986 |
If a table is too wide, it should scroll horizontally.
Artist | Album | Year | Label | Awards | Songs |
---|---|---|---|---|---|
Michael Jackson | Thriller | 1982 | Epic Records | Grammy Award for Album of the Year, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Selling Album, Grammy Award for Best Engineered Album, Non-Classical | Wanna Be Startin’ Somethin’, Baby Be Mine, The Girl Is Mine, Thriller, Beat It, Billie Jean, Human Nature, P.Y.T. (Pretty Young Thing), The Lady in My Life |
Prince | Purple Rain | 1984 | Warner Brothers Records | Grammy Award for Best Score Soundtrack for Visual Media, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Soundtrack/Cast Recording, Grammy Award for Best Rock Performance by a Duo or Group with Vocal | Let’s Go Crazy, Take Me With U, The Beautiful Ones, Computer Blue, Darling Nikki, When Doves Cry, I Would Die 4 U, Baby I’m a Star, Purple Rain |
Beastie Boys | License to Ill | 1986 | Mercury Records | noawardsbutthistablecelliswide | Rhymin & Stealin, The New Style, She’s Crafty, Posse in Effect, Slow Ride, Girls, (You Gotta) Fight for Your Right, No Sleep Till Brooklyn, Paul Revere, Hold It Now, Hit It, Brass Monkey, Slow and Low, Time to Get Ill |
Code snippets like var foo = "bar";
can be shown inline.
Also, this should vertically align
with this
and this.
Code can also be shown in a block element.
foo := "bar";
-bar := "foo";
-
Code can also use syntax highlighting.
func main() {
+Dixi ad aestum.Header 2
This is a blockquote following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Header 3
This is a code block following a header.
+
Header 4
- This is an unordered list following a header.
- This is an unordered list following a header.
- This is an unordered list following a header.
Header 5
- This is an ordered list following a header.
- This is an ordered list following a header.
- This is an ordered list following a header.
Header 6
What Follows A table A header A table A header A table A header
There’s a horizontal rule above and below this.
Here is an unordered list:
- Salt-n-Pepa
- Bel Biv DeVoe
- Kid ‘N Play
And an ordered list:
- Michael Jackson
- Michael Bolton
- Michael Bublé
And an unordered task list:
- Create a sample markdown document
- Add task lists to it
- Take a vacation
And a “mixed” task list:
- Steal underpants
- ?
- Profit!
And a nested list:
- Jackson 5
- Michael
- Tito
- Jackie
- Marlon
- Jermaine
- TMNT
- Leonardo
- Michelangelo
- Donatello
- Raphael
Definition lists can be used with Markdown syntax. Definition terms are bold.
- Name
- Godzilla
- Born
- 1952
- Birthplace
- Japan
- Color
- Green
Tables should have bold headings and alternating shaded rows.
Artist Album Year Michael Jackson Thriller 1982 Prince Purple Rain 1984 Beastie Boys License to Ill 1986
If a table is too wide, it should scroll horizontally.
Artist Album Year Label Awards Songs Michael Jackson Thriller 1982 Epic Records Grammy Award for Album of the Year, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Selling Album, Grammy Award for Best Engineered Album, Non-Classical Wanna Be Startin’ Somethin’, Baby Be Mine, The Girl Is Mine, Thriller, Beat It, Billie Jean, Human Nature, P.Y.T. (Pretty Young Thing), The Lady in My Life Prince Purple Rain 1984 Warner Brothers Records Grammy Award for Best Score Soundtrack for Visual Media, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Soundtrack/Cast Recording, Grammy Award for Best Rock Performance by a Duo or Group with Vocal Let’s Go Crazy, Take Me With U, The Beautiful Ones, Computer Blue, Darling Nikki, When Doves Cry, I Would Die 4 U, Baby I’m a Star, Purple Rain Beastie Boys License to Ill 1986 Mercury Records noawardsbutthistablecelliswide Rhymin & Stealin, The New Style, She’s Crafty, Posse in Effect, Slow Ride, Girls, (You Gotta) Fight for Your Right, No Sleep Till Brooklyn, Paul Revere, Hold It Now, Hit It, Brass Monkey, Slow and Low, Time to Get Ill
Code snippets like var foo = "bar";
can be shown inline.
Also, this should vertically align
with this
and this.
Code can also be shown in a block element.
foo := "bar";
+bar := "foo";
+
Code can also use syntax highlighting.
func main() {
input := `var foo = "bar";`
lexer := lexers.Get("javascript")
@@ -27,9 +27,9 @@
fmt.Println(buff.String())
}
-
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
-
Inline code inside table cells should still be distinguishable.
Language Code Javascript var foo = "bar";
Ruby foo = "bar"{
Small images should be shown at their actual size.
Large images should always scale down and fit in the content container.
Components
Alerts
This is an alert.Note:
This is an alert with a title.This is a successful alert.This is a warning!Warning!
This is a warning with a title!Sizing
Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Parameters available
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Using pixels
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Using rem
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Memory
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
RAM to use
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
More is better
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Used RAM
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is the final element on the page and there should be no margin below this.
-
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
+
Inline code inside table cells should still be distinguishable.
Language | Code |
---|---|
Javascript | var foo = "bar"; |
Ruby | foo = "bar"{ |
Small images should be shown at their actual size.
Large images should always scale down and fit in the content container.
Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is the final element on the page and there should be no margin below this.
+
This is a typical blog post that includes images.
The front matter specifies the date of the blog post, its title, a short description that will be displayed on the blog landing page, and its author.
Here’s an image (featured-sunset-get.png
) that includes a byline and a caption.
Fetch and scale an image in the upcoming Hugo 0.43.
-
Photo: Riona MacNamara / CC-BY-CA
The front matter of this post specifies properties to be assigned to all image resources:
resources:
-- src: "**.{png,jpg}"
- title: "Image #:counter"
- params:
- byline: "Photo: Riona MacNamara / CC-BY-CA"
-
To include the image in a page, specify its details like this:
+
Photo: Riona MacNamara / CC-BY-CA
The front matter of this post specifies properties to be assigned to all image resources:
resources:
+- src: "**.{png,jpg}"
+ title: "Image #:counter"
+ params:
+ byline: "Photo: Riona MacNamara / CC-BY-CA"
+
To include the image in a page, specify its details like this:
@@ -28,7 +28,7 @@
The image will be rendered at the size and byline specified in the front matter.
The image will be rendered at the size and byline specified in the front matter.
Text can be bold, italic, or strikethrough. Links should be blue with no underlines (unless hovered over).
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be no margin above this first sentence.
Blockquotes should be a lighter gray with a border along the left side in the secondary color.
There should be no margin below this final sentence.
This is a normal paragraph following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
On big screens, paragraphs and headings should not take up the full container width, but we want tables, code blocks and similar to take the full width.
Lorem markdownum tuta hospes stabat; idem saxum facit quaterque repetito occumbere, oves novem gestit haerebat frena; qui. Respicit recurvam erat: pignora hinc reppulit nos aut, aptos, ipsa.
Meae optatos passa est Epiros utiliter Talibus niveis, hoc lata, edidit. -Dixi ad aestum.
This is a blockquote following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is a code block following a header.
-
What | Follows |
---|---|
A table | A header |
A table | A header |
A table | A header |
There’s a horizontal rule above and below this.
Here is an unordered list:
And an ordered list:
And an unordered task list:
And a “mixed” task list:
And a nested list:
Definition lists can be used with Markdown syntax. Definition terms are bold.
Tables should have bold headings and alternating shaded rows.
Artist | Album | Year |
---|---|---|
Michael Jackson | Thriller | 1982 |
Prince | Purple Rain | 1984 |
Beastie Boys | License to Ill | 1986 |
If a table is too wide, it should scroll horizontally.
Artist | Album | Year | Label | Awards | Songs |
---|---|---|---|---|---|
Michael Jackson | Thriller | 1982 | Epic Records | Grammy Award for Album of the Year, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Selling Album, Grammy Award for Best Engineered Album, Non-Classical | Wanna Be Startin’ Somethin’, Baby Be Mine, The Girl Is Mine, Thriller, Beat It, Billie Jean, Human Nature, P.Y.T. (Pretty Young Thing), The Lady in My Life |
Prince | Purple Rain | 1984 | Warner Brothers Records | Grammy Award for Best Score Soundtrack for Visual Media, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Soundtrack/Cast Recording, Grammy Award for Best Rock Performance by a Duo or Group with Vocal | Let’s Go Crazy, Take Me With U, The Beautiful Ones, Computer Blue, Darling Nikki, When Doves Cry, I Would Die 4 U, Baby I’m a Star, Purple Rain |
Beastie Boys | License to Ill | 1986 | Mercury Records | noawardsbutthistablecelliswide | Rhymin & Stealin, The New Style, She’s Crafty, Posse in Effect, Slow Ride, Girls, (You Gotta) Fight for Your Right, No Sleep Till Brooklyn, Paul Revere, Hold It Now, Hit It, Brass Monkey, Slow and Low, Time to Get Ill |
Code snippets like var foo = "bar";
can be shown inline.
Also, this should vertically align
with this
and this.
Code can also be shown in a block element.
foo := "bar";
-bar := "foo";
-
Code can also use syntax highlighting.
func main() {
+Dixi ad aestum.Header 2
This is a blockquote following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Header 3
This is a code block following a header.
+
Header 4
- This is an unordered list following a header.
- This is an unordered list following a header.
- This is an unordered list following a header.
Header 5
- This is an ordered list following a header.
- This is an ordered list following a header.
- This is an ordered list following a header.
Header 6
What Follows A table A header A table A header A table A header
There’s a horizontal rule above and below this.
Here is an unordered list:
- Salt-n-Pepa
- Bel Biv DeVoe
- Kid ‘N Play
And an ordered list:
- Michael Jackson
- Michael Bolton
- Michael Bublé
And an unordered task list:
- Create a sample markdown document
- Add task lists to it
- Take a vacation
And a “mixed” task list:
- Steal underpants
- ?
- Profit!
And a nested list:
- Jackson 5
- Michael
- Tito
- Jackie
- Marlon
- Jermaine
- TMNT
- Leonardo
- Michelangelo
- Donatello
- Raphael
Definition lists can be used with Markdown syntax. Definition terms are bold.
- Name
- Godzilla
- Born
- 1952
- Birthplace
- Japan
- Color
- Green
Tables should have bold headings and alternating shaded rows.
Artist Album Year Michael Jackson Thriller 1982 Prince Purple Rain 1984 Beastie Boys License to Ill 1986
If a table is too wide, it should scroll horizontally.
Artist Album Year Label Awards Songs Michael Jackson Thriller 1982 Epic Records Grammy Award for Album of the Year, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Selling Album, Grammy Award for Best Engineered Album, Non-Classical Wanna Be Startin’ Somethin’, Baby Be Mine, The Girl Is Mine, Thriller, Beat It, Billie Jean, Human Nature, P.Y.T. (Pretty Young Thing), The Lady in My Life Prince Purple Rain 1984 Warner Brothers Records Grammy Award for Best Score Soundtrack for Visual Media, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Soundtrack/Cast Recording, Grammy Award for Best Rock Performance by a Duo or Group with Vocal Let’s Go Crazy, Take Me With U, The Beautiful Ones, Computer Blue, Darling Nikki, When Doves Cry, I Would Die 4 U, Baby I’m a Star, Purple Rain Beastie Boys License to Ill 1986 Mercury Records noawardsbutthistablecelliswide Rhymin & Stealin, The New Style, She’s Crafty, Posse in Effect, Slow Ride, Girls, (You Gotta) Fight for Your Right, No Sleep Till Brooklyn, Paul Revere, Hold It Now, Hit It, Brass Monkey, Slow and Low, Time to Get Ill
Code snippets like var foo = "bar";
can be shown inline.
Also, this should vertically align
with this
and this.
Code can also be shown in a block element.
foo := "bar";
+bar := "foo";
+
Code can also use syntax highlighting.
func main() {
input := `var foo = "bar";`
lexer := lexers.Get("javascript")
@@ -27,9 +27,9 @@
fmt.Println(buff.String())
}
-
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
-
Inline code inside table cells should still be distinguishable.
Language Code Javascript var foo = "bar";
Ruby foo = "bar"{
Small images should be shown at their actual size.
Large images should always scale down and fit in the content container.
Components
Alerts
This is an alert.Note:
This is an alert with a title.This is a successful alert.This is a warning!Warning!
This is a warning with a title!Sizing
Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Parameters available
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Using pixels
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Using rem
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Memory
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
RAM to use
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
More is better
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Used RAM
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is the final element on the page and there should be no margin below this.
-
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
+
Inline code inside table cells should still be distinguishable.
Language | Code |
---|---|
Javascript | var foo = "bar"; |
Ruby | foo = "bar"{ |
Small images should be shown at their actual size.
Large images should always scale down and fit in the content container.
Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is the final element on the page and there should be no margin below this.
+
This is the multi-page printable view of this section. Click here to print.
This is the blog section. It has two categories: News and Releases.
Files in these directories will be listed in reverse chronological order.
This is a typical blog post that includes images.
The front matter specifies the date of the blog post, its title, a short description that will be displayed on the blog landing page, and its author.
Here’s an image (featured-sunset-get.png
) that includes a byline and a caption.
Fetch and scale an image in the upcoming Hugo 0.43.
-
Photo: Riona MacNamara / CC-BY-CA
The front matter of this post specifies properties to be assigned to all image resources:
resources:
-- src: "**.{png,jpg}"
- title: "Image #:counter"
- params:
- byline: "Photo: Riona MacNamara / CC-BY-CA"
-
To include the image in a page, specify its details like this:
+
Photo: Riona MacNamara / CC-BY-CA
The front matter of this post specifies properties to be assigned to all image resources:
resources:
+- src: "**.{png,jpg}"
+ title: "Image #:counter"
+ params:
+ byline: "Photo: Riona MacNamara / CC-BY-CA"
+
To include the image in a page, specify its details like this:
@@ -26,13 +26,13 @@
The image will be rendered at the size and byline specified in the front matter.
Text can be bold, italic, or strikethrough. Links should be blue with no underlines (unless hovered over).
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be no margin above this first sentence.
Blockquotes should be a lighter gray with a border along the left side in the secondary color.
There should be no margin below this final sentence.
This is a normal paragraph following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
On big screens, paragraphs and headings should not take up the full container width, but we want tables, code blocks and similar to take the full width.
Lorem markdownum tuta hospes stabat; idem saxum facit quaterque repetito occumbere, oves novem gestit haerebat frena; qui. Respicit recurvam erat: pignora hinc reppulit nos aut, aptos, ipsa.
Meae optatos passa est Epiros utiliter Talibus niveis, hoc lata, edidit. Dixi ad aestum.
Header 2
This is a blockquote following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Header 3
This is a code block following a header.
Header 4
- This is an unordered list following a header.
- This is an unordered list following a header.
- This is an unordered list following a header.
Header 5
1. This is an ordered list following a header.
2. This is an ordered list following a header.
3. This is an ordered list following a header.
Header 6
What | Follows |
---|---|
A table | A header |
A table | A header |
A table | A header |
There’s a horizontal rule above and below this.
Here is an unordered list:
- Salt-n-Pepa
- Bel Biv DeVoe
- Kid ‘N Play
And an ordered list:
1. Michael Jackson
2. Michael Bolton
3. Michael Bublé
And an unordered task list:
- Create a sample markdown document
- Add task lists to it
- Take a vacation
And a “mixed” task list:
- Steal underpants
- ?
- Profit!
And a nested list:
- Jackson 5
  - Michael
  - Tito
  - Jackie
  - Marlon
  - Jermaine
- TMNT
  - Leonardo
  - Michelangelo
  - Donatello
  - Raphael
Definition lists can be used with Markdown syntax. Definition terms are bold.
Name
: Godzilla
Born
: 1952
Birthplace
: Japan
Color
: Green
Tables should have bold headings and alternating shaded rows.
Artist | Album | Year |
---|---|---|
Michael Jackson | Thriller | 1982 |
Prince | Purple Rain | 1984 |
Beastie Boys | License to Ill | 1986 |
If a table is too wide, it should scroll horizontally.
Artist | Album | Year | Label | Awards | Songs |
---|---|---|---|---|---|
Michael Jackson | Thriller | 1982 | Epic Records | Grammy Award for Album of the Year, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Selling Album, Grammy Award for Best Engineered Album, Non-Classical | Wanna Be Startin’ Somethin’, Baby Be Mine, The Girl Is Mine, Thriller, Beat It, Billie Jean, Human Nature, P.Y.T. (Pretty Young Thing), The Lady in My Life |
Prince | Purple Rain | 1984 | Warner Brothers Records | Grammy Award for Best Score Soundtrack for Visual Media, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Soundtrack/Cast Recording, Grammy Award for Best Rock Performance by a Duo or Group with Vocal | Let’s Go Crazy, Take Me With U, The Beautiful Ones, Computer Blue, Darling Nikki, When Doves Cry, I Would Die 4 U, Baby I’m a Star, Purple Rain |
Beastie Boys | License to Ill | 1986 | Mercury Records | noawardsbutthistablecelliswide | Rhymin & Stealin, The New Style, She’s Crafty, Posse in Effect, Slow Ride, Girls, (You Gotta) Fight for Your Right, No Sleep Till Brooklyn, Paul Revere, Hold It Now, Hit It, Brass Monkey, Slow and Low, Time to Get Ill |
Code snippets like var foo = "bar";
can be shown inline.
Also, this should vertically align
with this
and this.
Code can also be shown in a block element.
foo := "bar";
bar := "foo";
Code can also use syntax highlighting.
func main() {
input := `var foo = "bar";`
lexer := lexers.Get("javascript")
	// … tokenise the input and format the highlighted result into buff …
fmt.Println(buff.String())
}
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
Inline code inside table cells should still be distinguishable.
Language | Code |
---|---|
Javascript | var foo = "bar"; |
Ruby | foo = "bar"{ |
Small images should be shown at their actual size.
Large images should always scale down and fit in the content container.
Components
Alerts
This is an alert.
Note: This is an alert with a title.
This is a successful alert.
This is a warning!
Warning! This is a warning with a title!
Sizing
Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Parameters available
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Using pixels
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Using rem
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Memory
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
RAM to use
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
More is better
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
Used RAM
Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
This is the final element on the page and there should be no margin below this.
The system's primary use cases are the graph-data storage, modeling, and analysis needs of businesses such as anti-fraud, threat intelligence, and combating black-market activity; on this basis it has gradually been extended to support more general-purpose graph applications.
HugeGraph supports graph operations in both online and offline environments, batch data import, efficient analysis of complex relationships, and seamless integration with big-data platforms. HugeGraph supports concurrent operation by multiple users: a user can submit Gremlin query statements and promptly receive the results, or call the HugeGraph API from their own programs for graph analysis or queries.
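As a sketch of the API route, a Gremlin query can be sent to a running server over HTTP (the endpoint, port, and graph name below assume a default local installation):
# send a small Gremlin query to a local HugeGraphServer (defaults assumed)
curl -X POST http://127.0.0.1:8080/apis/gremlin \
  -H 'Content-Type: application/json' \
  -d '{"gremlin": "hugegraph.traversal().V().limit(3)"}'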
The system has the following characteristics:
Its functionality includes, but is not limited to:
The latest HugeGraph: 1.0.0, released on 2023-02-22 (see how to build from source).
Components | Description | Download |
---|---|---|
HugeGraph-Server | The main program of HugeGraph | 1.0.0 (mirror) |
HugeGraph-Toolchain | A collection of tools for data import/export/backup, the web visualization UI, etc. | 1.0.0 (mirror) |
Version | Release Date | server | toolchain | computer | Release Notes |
---|---|---|---|---|---|
1.0.0 | 2023-02-22 | [Binary] [Sign] [SHA512] | [Binary] [Sign] [SHA512] | [Binary] [Sign] [SHA512] | Release-Notes |
Version | Release Date | server | toolchain | computer | common | Release Notes |
---|---|---|---|---|---|---|
1.0.0 | 2023-02-22 | [Source] [Sign] [SHA512] | [Source] [Sign] [SHA512] | [Source] [Sign] [SHA512] | [Source] [Sign] [SHA512] | Release-Notes |
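Each release can be checked against its published checksum; a minimal sketch, assuming the tarball and its .sha512 file were downloaded into the current directory:
# verify the server tarball against the Apache-published SHA512 checksum
sha512sum -c apache-hugegraph-incubating-1.0.0.tar.gz.sha512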
Note: the latest graph analysis and visualization platform is hubble, which supports server version 0.10 and later; studio is the analysis and visualization platform for server 0.10.x and earlier, and its features are no longer updated as of 0.10.
HugeGraph-Server is the core part of the HugeGraph project, containing the Core, Backend, and API submodules.
The Core module implements the Tinkerpop interface; the Backend module manages data storage, with currently supported backends including Memory, Cassandra, ScyllaDB, and RocksDB; the API module provides the HTTP Server, which translates the Client's HTTP requests into calls to Core.
The docs frequently use both spellings, HugeGraph-Server and HugeGraphServer (and similarly for other components). The two differ little in meaning and can be distinguished as follows: HugeGraph-Server refers to the server-side component code, while HugeGraphServer refers to the running service process.
Please prefer running HugeGraph-Server on Java 11; compatibility with Java 8 is retained for now.
Before reading further, be sure to run the java -version command to check the JDK version:
java -version
If you are using the RocksDB backend, be sure to run the gcc --version command to check the GCC version; other backends do not require this:
gcc --version
There are three ways to deploy the HugeGraph-Server component:
HugeGraph-Tools provides a one-click deployment command-line tool with which you can quickly download, unpack, configure, and start HugeGraph-Server and HugeGraph-Hubble in a single step. The latest HugeGraph-Toolchain already contains all of these tools; just download and unpack it to get the whole tool collection.
# download toolchain package, it includes loader + tool + hubble, please check the latest version (here is 1.0.0)
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
tar zxf *hugegraph-*.tar.gz
# enter the tool's package
cd *hugegraph*/*tool*
Note: ${version} is the version number; for the latest version number see the Download page, or download directly via the links on that page.
The main entry script of HugeGraph-Tools is bin/hugegraph; you can use the help subcommand to see its usage. Only the one-click deployment command is described here:
bin/hugegraph deploy -v {hugegraph-version} -p {install-path} [-u {download-path-prefix}]
{hugegraph-version} is the version of HugeGraphServer and HugeGraphStudio to deploy (see the conf/version-mapping.yaml file for version information); {install-path} is the installation directory for HugeGraphServer and HugeGraphStudio; {download-path-prefix} is optional and specifies the download address prefix for the HugeGraphServer and HugeGraphStudio tarballs (the default address is used when it is omitted). For example, to start HugeGraph-Server and HugeGraphStudio version 0.6, write the command above as bin/hugegraph deploy -v 0.6 -p services.
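For instance, a concrete invocation (the version number and install path are illustrative):
# deploy HugeGraphServer and HugeGraphStudio 0.6 under ./services
bin/hugegraph deploy -v 0.6 -p services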
# use the latest version, here is 1.0.0 for example
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz
Make sure the wget command is installed before compiling from source.
Download the HugeGraph source code:
git clone https://github.com/apache/hugegraph.git
Compile and package to generate the tarball:
cd hugegraph
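# package with Maven (a sketch: the exact command was elided in this excerpt;
# the standard HugeGraph Maven packaging is assumed)
mvn package -DskipTests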
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
......
After a successful build, a hugegraph-*.tar.gz file is generated in the hugegraph directory; this is the compiled tarball.
Alternatively, refer to the Docker deployment method.
If you only need to start HugeGraph quickly for testing, modifying a few configuration items is enough (see the next section). For a detailed introduction to configuration, refer to the configuration documentation and the description of the configuration options.
Startup is divided into "first startup" and "subsequent startup". The distinction exists because the backend database must be initialized before the service is started for the first time, whereas when the service is started again later (after being stopped manually or for any other reason) the backend database already persists, so the service can simply be started directly.
On startup, HugeGraphServer connects to the backend storage and tries to check its version number. If the backend is not initialized, or it is initialized but the version does not match (old-version data), HugeGraphServer fails to start and reports an error.
If HugeGraphServer needs to be reachable externally, modify the restserver.url configuration item in rest-server.properties (the default is http://127.0.0.1:8080) to a machine name or IP address.
),修改成机器名或IP地址。由于各种后端所需的配置(hugegraph.properties)及启动步骤略有不同,下面逐一对各后端的配置及启动做介绍。
5.1 Memory
修改 hugegraph.properties
backend=memory
+serializer=text
+
Memory后端的数据是保存在内存中无法持久化的,不需要初始化后端,这也是唯一一个不需要初始化的后端。
启动 server
bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
The URL in the prompt matches the restserver.url configured in rest-server.properties.
5.2 RocksDB
RocksDB is an embedded database and needs no separate installation or deployment. It requires GCC >= 4.3.0 (GLIBCXX_3.4.10); if this is not met, upgrade GCC first.
Modify hugegraph.properties:
backend=rocksdb
serializer=binary
rocksdb.data_path=.
rocksdb.wal_path=.

Initialize the database (only required before the first start):
cd hugegraph-${version}
bin/init-store.sh
Start the server:
bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
5.3 Cassandra
You must install Cassandra yourself; version 3.0 or above is required (download address).
Modify hugegraph.properties:
backend=cassandra
serializer=cassandra

# cassandra backend config
cassandra.host=localhost
cassandra.port=9042
cassandra.username=
cassandra.password=
#cassandra.connect_timeout=5
#cassandra.read_timeout=20

#cassandra.keyspace.strategy=SimpleStrategy
#cassandra.keyspace.replication=3

Initialize the database (only required before the first start):
cd hugegraph-${version}
bin/init-store.sh
Initing HugeGraph Store...
2017-12-01 11:26:51 1424 [main] [INFO ] com.baidu.hugegraph.HugeGraph [] - Opening backend store: 'cassandra'
Start the server:
bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
5.4 ScyllaDB
You must install ScyllaDB yourself; version 2.1 or above is recommended (download address).
Modify hugegraph.properties:
backend=scylladb
serializer=scylladb

# cassandra backend config
cassandra.host=localhost
cassandra.port=9042
cassandra.username=
cassandra.password=
#cassandra.connect_timeout=5
#cassandra.read_timeout=20

#cassandra.keyspace.strategy=SimpleStrategy
#cassandra.keyspace.replication=3

Since ScyllaDB is essentially an "optimized version" of Cassandra, if you have not installed ScyllaDB you can also use Cassandra directly as the backend storage: just set backend and serializer to scylladb and point host and port at the seeds and port of the Cassandra cluster. This is not recommended, however, since it forgoes ScyllaDB's own advantages.
Initialize the database (only required before the first start):
cd hugegraph-${version}
bin/init-store.sh
Start the server:
bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
5.5 HBase
You must install HBase yourself; version 2.0 or above is required (download address).
Modify hugegraph.properties:
backend=hbase
serializer=hbase

# hbase backend config
hbase.hosts=localhost
hbase.port=2181
# Note: recommend to modify the HBase partition number by the actual/env data amount & RS amount before init store
# it may influence the loading speed a lot
#hbase.enable_partition=true
#hbase.vertex_partitions=10
#hbase.edge_partitions=30

Initialize the database (only required before the first start):
cd hugegraph-${version}
bin/init-store.sh
Start the server:
bin/start-hugegraph.sh
Starting HugeGraphServer...
Request the RESTful API with curl:
echo `curl -o /dev/null -s -w %{http_code} "http://localhost:8080/graphs/hugegraph/graph/vertices"`
A returned status of 200 means the server started normally.
6.2 Request the server
HugeGraphServer's RESTful API includes several types of resources, typically graph, schema, gremlin, traverser and task:
- graph contains vertices and edges
- schema contains vertexlabels, propertykeys, edgelabels and indexlabels
- gremlin contains all kinds of Gremlin statements, such as g.v(), which can be executed synchronously or asynchronously
- traverser contains all kinds of advanced queries, including shortest paths, intersections, N-step reachable neighbors, etc.
- task contains querying and deleting asynchronous tasks
6.2.1 Get the vertices and their properties of the graph hugegraph
curl http://localhost:8080/graphs/hugegraph/graph/vertices
Note:
Since graphs have many vertices and edges, the server compresses the data before returning it for list-type requests, such as fetching all vertices or all edges,
so what curl prints looks like gibberish; pipe the output to gunzip to decompress it. Testing HTTP requests with the Chrome browser plus the Restlet plugin is recommended.
curl "http://localhost:8080/graphs/hugegraph/graph/vertices" | gunzip
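Alternatively, curl can negotiate and decode the compression itself; a minimal sketch using curl's standard --compressed flag (not a HugeGraph-specific option):
# ask for a compressed transfer and let curl decompress the response
curl --compressed "http://localhost:8080/graphs/hugegraph/graph/vertices"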
By default HugeGraphServer only accepts local access; change the configuration to allow access from other machines:
vim conf/rest-server.properties

restserver.url=http://0.0.0.0:8080

The response body looks like this:
{
"vertices": [
{
"id": "2lop",
...
]
}
For detailed APIs, refer to the RESTful-API documentation.
7 Stop the server
$cd hugegraph-${version}
$bin/stop-hugegraph.sh
3.2 - HugeGraph-Loader Quick Start
1 HugeGraph-Loader overview
HugeGraph-Loader is HugeGraph's data import component. It converts data from a variety of sources into graph vertices and edges and imports them into the graph database in batches.
Currently supported data sources include:
- local disk files or directories, in TEXT, CSV and JSON formats, including compressed files
- HDFS files or directories, including compressed files
- mainstream relational databases such as MySQL, PostgreSQL, Oracle and SQL Server
Local disk files and HDFS files support resuming interrupted imports.
Details follow below.
Note: HugeGraph-Loader depends on the HugeGraph Server service; to download and start the server, see HugeGraph-Server Quick Start.
2 Get HugeGraph-Loader
There are two ways to get HugeGraph-Loader:
- download the compiled tarball
- clone the source code and build it
2.1 Download the compiled tarball
Download the latest HugeGraph-Toolchain release package, which contains the full loader + tool + hubble toolset; if you have already downloaded it, skip this step.
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz
2.2 Clone the source code and build
Clone the latest HugeGraph-Loader source package:
# 1. get from github
git clone https://github.com/apache/hugegraph-toolchain.git
# 2. get from direct (e.g. here is 1.0.0, please choose the latest version)
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0-src.tar.gz
Because of the Oracle ojdbc license restriction, ojdbc must be installed into the local maven repository manually.
Visit the Oracle JDBC download page and select the Oracle Database 12c Release 2 (12.2.0.1) drivers, as shown below. After opening the link, choose "ojdbc8.jar", as shown below.
Install ojdbc8 into the local maven repository: go to the directory containing ojdbc8.jar and run:
mvn install:install-file -Dfile=./ojdbc8.jar -DgroupId=com.oracle -DartifactId=ojdbc8 -Dversion=12.2.0.1 -Dpackaging=jar

Build the tar package:
cd hugegraph-loader
mvn clean package -DskipTests
3 Usage workflow
The basic workflow of HugeGraph-Loader consists of the following steps:
- write the graph schema
- prepare the data files
- write the input-source mapping file
- run the import command
3.1 Write the graph schema
This is the modeling step: you need a clear idea of your existing data and the graph model you want to build, and then write a schema that establishes that model.
For example, suppose you want a graph with two kinds of vertices and two kinds of edges: the vertices are "person" and "software", the edges are "person knows person" and "person created software", and the vertices and edges carry properties, e.g. the "person" vertex has "name" and "age", the "software" vertex has "name" and "price", and the "knows" edge has a "date" property.
Example graph model
Once the graph model is designed, we can write the schema definition in groovy and save it to a file, here named schema.groovy.
// create some property keys
schema.edgeLabel("knows").sourceLabel("person").targetLabel("person").ifNotExist().create();
// create the created edge label; edges of this type point from person to software
schema.edgeLabel("created").sourceLabel("person").targetLabel("software").ifNotExist().create();
Refer to the corresponding section in hugegraph-client for a detailed description of schemas.
3.2 Prepare the data
The data sources currently supported by HugeGraph-Loader include:
- local disk files or directories
- HDFS files or directories
- some relational databases
3.2.1 Data source structure
3.2.1.1 Local disk files or directories
You can use a local disk file as the data source. If the data is spread over several files, a directory can also serve as the data source, but multiple directories are not yet supported as a single source.
For example: if the data is spread over files part-0, part-1 … part-n, they must all live in one directory for the import to work. Then, in the loader's mapping file, simply set path to that directory.
Supported file formats include:
- TEXT
- CSV
- JSON
TEXT is a text file with a custom delimiter. The first line is usually a header recording the name of each column; a file without a header line is also allowed (specify this in the mapping file). Every other line represents one record and is converted into one vertex/edge; each column of a line corresponds to a field and becomes the vertex's/edge's id, label or a property.
An example:
id|name|lang|price|ISBN
1|lop|java|328|ISBN978-7-107-18618-5
2|ripple|java|199|ISBN978-7-100-13678-5

CSV is a TEXT file whose delimiter is a comma (,). When a column value itself contains a comma, the value must be wrapped in double quotes, e.g.:
marko,29,Beijing
"li,nary",26,"Wu,han"

JSON files must contain one JSON string per line, and every line must have the same format.
{"source_name": "marko", "target_name": "vadas", "date": "20160110", "weight": 0.5}
{"source_name": "marko", "target_name": "josh", "date": "20130220", "weight": 1.0}
3.2.1.2 HDFS files or directories
You can also use HDFS files or directories as the data source; all of the requirements above for local disk files or directories apply here as well. In addition, since HDFS usually stores compressed files, the loader also supports compressed files, and local disk files or directories may be compressed too.
The compressed file types currently supported are: GZIP, BZ2, XZ, LZMA, SNAPPY_RAW, SNAPPY_FRAMED, Z, DEFLATE, LZ4_BLOCK, LZ4_FRAMED, ORC and PARQUET.
3.2.1.3 Mainstream relational databases
The loader also supports some relational databases as data sources, currently MySQL, PostgreSQL, Oracle and SQL Server.
However, the requirements on table structure are currently rather strict: tables that would require join-style lookups during import are not allowed. A join-style lookup means that after reading a row, some column's value cannot be used directly (a foreign key, for instance) and another query is needed to resolve its real value.
An example: suppose there are three tables, person, software and created
// person table structure
id | name | age | city

// software table structure
id | name | lang | price

// created table structure
id | p_id | s_id | date

If during modeling (schema) the id strategy of person or software is set to PRIMARY_KEY with name as the primary key (note: this is the vertexlabel concept in hugegraph), then when importing edge data the source and target vertex ids must be assembled, which requires looking up the name in the person/software table by p_id/s_id. The loader does not yet support table structures that require such extra queries. Two alternatives are available:
- keep the id strategy of person and software as PRIMARY_KEY, but use the id column of the person and software tables as the vertices' primary-key property, so that edge import can build the id directly from p_id, s_id and the vertex label;
- set the id strategy of person and software to CUSTOMIZE and use the id column of the person and software tables directly as the vertex id, so that edge import can use p_id and s_id directly;
The key point is that edges must be able to use p_id and s_id directly, without an extra lookup.
3.2.2 Prepare vertex and edge data
3.2.2.1 Vertex data
A vertex data file consists of lines of data; generally each line becomes one vertex and each column becomes a vertex property. The following uses CSV as an example.
- person vertex data (the data itself contains no header)
Tom,48,Beijing
Jerry,36,Shanghai

- software vertex data (the data itself contains a header)
name,price
Photoshop,999
Office,388

3.2.2.2 Edge data
An edge data file consists of lines of data; generally each line becomes one edge. Some columns are used as the source and target vertex ids, and the other columns become edge properties. The following uses JSON as an example.
- knows edge data
{"source_name": "Tom", "target_name": "Jerry", "date": "2008-12-12"}
- created edge data
{"source_name": "Tom", "target_name": "Photoshop"}
{"source_name": "Tom", "target_name": "Office"}
{"source_name": "Jerry", "target_name": "Office"}
Of course, if a corrected data line still has problems, it is recorded in the failure file again (don't worry about duplicate lines). Each vertex mapping or edge mapping produces its own failure file when some of its data fails to insert. A failure file is split into a parse-failure file (suffix .parse-error) and an insert-failure file (suffix .insert-error), and they are saved under the ${struct}/current directory. For example, if the mapping file has a vertex mapping person and an edge mapping knows, each with some bad lines, then after the loader exits you will see the following files in the ${struct}/current directory:
- person-b4cd32ab.parse-error: data the person vertex mapping failed to parse
- person-b4cd32ab.insert-error: data the person vertex mapping failed to insert
- knows-eb6b2bac.parse-error: data the knows edge mapping failed to parse
- knows-eb6b2bac.insert-error: data the knows edge mapping failed to insert
.parse-error and .insert-error do not always exist together; a .parse-error file exists only if some lines failed to parse, and an .insert-error file exists only if some lines failed to insert.
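A minimal sketch for inspecting these files after a run (substitute your own mapping-file directory for ${struct}):
# list and peek at the failure files produced by the last run
ls ${struct}/current/
head ${struct}/current/*.parse-error ${struct}/current/*.insert-error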
3.4.3 Files in the logs directory
Logs and error data produced during execution are written to the hugegraph-loader.log file.
3.4.4 Run the command
Run bin/hugegraph-loader and pass the parameters:
bin/hugegraph-loader -g {GRAPH_NAME} -f ${INPUT_DESC_FILE} -s ${SCHEMA_FILE} -h {HOST} -p {PORT}
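For instance, with the example files from section 4 below and a local server, the invocation might look like this (host and port are assumptions matching the defaults used elsewhere in this guide):
bin/hugegraph-loader -g hugegraph -f example/file/struct.json -s example/file/schema.groovy -h localhost -p 8080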
4 Complete example
Below is the example from the example directory of the hugegraph-loader package (GitHub address).
4.1 Prepare the data
Vertex file: example/file/vertex_person.csv
marko,29,Beijing
vadas,27,Hongkong
josh,32,Beijing
peter,35,Shanghai
"li,nary",26,"Wu,han"
tom,null,NULL

Vertex file: example/file/vertex_software.txt
id|name|lang|price|ISBN
1|lop|java|328|ISBN978-7-107-18618-5
2|ripple|java|199|ISBN978-7-100-13678-5

Edge file: example/file/edge_knows.json
{"source_name": "marko", "target_name": "vadas", "date": "20160110", "weight": 0.5}
{"source_name": "marko", "target_name": "josh", "date": "20130220", "weight": 1.0}

Edge file: example/file/edge_created.json
{"aname": "marko", "bname": "lop", "date": "20171210", "weight": 0.4}
{"aname": "josh", "bname": "lop", "date": "20091111", "weight": 0.4}
{"aname": "josh", "bname": "ripple", "date": "20171210", "weight": 1.0}
{"aname": "peter", "bname": "lop", "date": "20170324", "weight": 0.2}

4.2 Write the schema
Schema file: example/file/schema.groovy
schema.propertyKey("name").asText().ifNotExist().create();
schema.propertyKey("age").asInt().ifNotExist().create();
schema.propertyKey("city").asText().ifNotExist().create();
schema.propertyKey("weight").asDouble().ifNotExist().create();
schema.vertexLabel("person").properties("name", "age", "city").primaryKeys("name").ifNotExist().create();
schema.vertexLabel("software").properties("name", "lang", "price").primaryKeys("name").ifNotExist().create();
schema.indexLabel("personByAge").onV("person").by("age").range().ifNotExist().create();
schema.indexLabel("personByCity").onV("person").by("city").secondary().ifNotExist().create();
schema.indexLabel("personByAgeAndCity").onV("person").by("age", "city").secondary().ifNotExist().create();
"label": "person",
"input": {
"type": "file",
- "path": "example/vertex_person.csv",
+ "path": "example/file/vertex_person.csv",
"format": "CSV",
"header": ["name", "age", "city"],
- "charset": "UTF-8"
+ "charset": "UTF-8",
+ "skipped_line": {
+ "regex": "(^#|^//).*"
+ }
},
- "mapping": {
- "name": "name",
- "age": "age",
- "city": "city"
- }
+ "null_values": ["NULL", "null", ""]
},
{
"label": "software",
"input": {
"type": "file",
- "path": "example/vertex_software.text",
+ "path": "example/file/vertex_software.txt",
"format": "TEXT",
"delimiter": "|",
"charset": "GBK"
- }
+ },
+ "id": "id",
+ "ignored": ["ISBN"]
}
],
"edges": [
"target": ["target_name"],
"input": {
"type": "file",
- "path": "example/edge_knows.json",
- "format": "JSON"
+ "path": "example/file/edge_knows.json",
+ "format": "JSON",
+ "date_format": "yyyyMMdd"
},
- "mapping": {
+ "field_mapping": {
"source_name": "name",
"target_name": "name"
}
},
{
"label": "created",
- "source": ["aname"],
- "target": ["bname"],
+ "source": ["source_name"],
+ "target": ["target_id"],
"input": {
"type": "file",
- "path": "example/edge_created.json",
- "format": "JSON"
+ "path": "example/file/edge_created.json",
+ "format": "JSON",
+ "date_format": "yyyy-MM-dd"
},
- "mapping": {
- "aname": "name",
- "bname": "name"
+ "field_mapping": {
+ "source_name": "name"
}
}
]
}
4.4 Run the import command
sh bin/hugegraph-loader.sh -g hugegraph -f example/file/struct.json -s example/file/schema.groovy
After the import finishes, statistics similar to the following appear:
vertices/edges has been loaded this time : 8/6
--------------------------------------------------
count metrics
 input read success : 14
 input read failure : 0
 vertex parse success : 8
 vertex parse failure : 0
 vertex insert success : 8
 vertex insert failure : 0
 edge parse success : 6
 edge parse failure : 0
 edge insert success : 6
 edge insert failure : 0

4.5 Import with spark-loader
Spark version: Spark 3+; other versions are untested.
HugeGraph Toolchain version: toolchain-1.0.0
The parameters of spark-loader fall into two groups. Note: because the abbreviated parameter names of the two groups overlap, always use the full parameter names. The two kinds of parameters may appear in any order.
- hugegraph parameters (see: hugegraph-loader parameter description)
- Spark job submission parameters (see: Submitting Applications)
Example:
sh bin/hugegraph-spark-loader.sh --master yarn \
--deploy-mode cluster --name spark-hugegraph-loader --file ./hugegraph.json \
--username admin --token admin --host xx.xx.xx.xx --port 8093 \
--graph graph-test --num-executors 6 --executor-cores 16 --executor-memory 15g
3.3 - HugeGraph-Tools Quick Start
1 HugeGraph-Tools overview
HugeGraph-Tools is HugeGraph's component for automated deployment, management and backup/restore.
2 Get HugeGraph-Tools
There are two ways to get HugeGraph-Tools (it is bundled in the Toolchain):
- download the binary tarball
- download the source code and build it
2.1 Download the binary tarball
Download the latest HugeGraph-Toolchain package, then enter the tools subdirectory:
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz
2.2 Download the source code and build
Before building from source, make sure the wget command is installed.
Download the latest HugeGraph-Toolchain source package, then build from the root directory or build the tool submodule alone:
# 1. get from github
git clone https://github.com/apache/hugegraph-toolchain.git
# 2. get from direct (e.g. here is 1.0.0, please choose the latest version)
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0-src.tar.gz
Build the tar package:
cd hugegraph-tools
mvn package -DskipTests
This generates the tar package hugegraph-tools-${version}.tar.gz.
3 Usage
3.1 Feature overview
After unpacking, enter the hugegraph-tools directory; use bin/hugegraph or bin/hugegraph help to view the usage information. The commands fall into these groups:
- graph management: graph-mode-set, graph-mode-get, graph-list, graph-get and graph-clear
- asynchronous task management: task-list, task-get, task-delete, task-cancel and task-clear
- Gremlin: gremlin-execute and gremlin-schedule
- backup/restore: backup, restore, migrate, schedule-backup and dump
- installation and deployment: deploy, clear, start-all and stop-all
Usage: hugegraph [options] [command] [command options]
3.2 [options] - Global variables
options are HugeGraph-Tools' global variables; they can be configured in hugegraph-tools/bin/hugegraph and include:
- --graph, the name of the graph HugeGraph-Tools operates on; default is hugegraph
- --url, the service address of HugeGraph-Server; default is http://127.0.0.1:8080
- --user, the username to pass when HugeGraph-Server has authentication enabled
- --password, the user's password when HugeGraph-Server has authentication enabled
- --timeout, the timeout for connecting to HugeGraph-Server; default is 30s
- --trust-store-file, path of the certificate file: the truststore file used by HugeGraph-Client when --url uses https; empty by default, meaning the built-in hugegraph-tools truststore file conf/hugegraph.truststore is used
- --trust-store-password, password of the certificate file: the truststore password used by HugeGraph-Client when --url uses https; empty by default, meaning the password of the built-in hugegraph-tools truststore file is used
These global variables can also be set through environment variables. One way is to set temporary environment variables with export on the command line; they remain effective until that shell is closed:

Global variable | Environment variable | Example
---|---|---
--url | HUGEGRAPH_URL | export HUGEGRAPH_URL=http://127.0.0.1:8080
--graph | HUGEGRAPH_GRAPH | export HUGEGRAPH_GRAPH=hugegraph
--user | HUGEGRAPH_USERNAME | export HUGEGRAPH_USERNAME=admin
--password | HUGEGRAPH_PASSWORD | export HUGEGRAPH_PASSWORD=test
--timeout | HUGEGRAPH_TIMEOUT | export HUGEGRAPH_TIMEOUT=30
--trust-store-file | HUGEGRAPH_TRUST_STORE_FILE | export HUGEGRAPH_TRUST_STORE_FILE=/tmp/trust-store
--trust-store-password | HUGEGRAPH_TRUST_STORE_PASSWORD | export HUGEGRAPH_TRUST_STORE_PASSWORD=xxxx

The other way is to set the environment variables in the bin/hugegraph script:
#!/bin/bash

# Set environment here if needed
#export HUGEGRAPH_URL=
#export HUGEGRAPH_GRAPH=
#export HUGEGRAPH_USERNAME=
#export HUGEGRAPH_PASSWORD=
#export HUGEGRAPH_TIMEOUT=
#export HUGEGRAPH_TRUST_STORE_FILE=
#export HUGEGRAPH_TRUST_STORE_PASSWORD=
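A minimal usage sketch once the variables are set (graph-list is one of the management commands described below):
export HUGEGRAPH_URL=http://127.0.0.1:8080
export HUGEGRAPH_GRAPH=hugegraph
bin/hugegraph graph-list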
3.3 Graph management: graph-mode-set, graph-mode-get, graph-list, graph-get and graph-clear
- graph-mode-set, set the graph's restore mode
- --graph-mode or -m, required; the mode to set, legal values are [NONE, RESTORING, MERGING, LOADING]
- graph-mode-get, get the graph's restore mode
- graph-list, list all graphs in a HugeGraph-Server
- graph-get, get a graph and its storage backend type
- graph-clear, clear all schema and data of a graph (see the sketch after this list)
- --confirm-message or -c, required; the deletion confirmation message, entered manually as a second confirmation against accidental deletion: "I'm sure to delete all data", including the double quotes
To restore a backed-up graph as-is into a new graph, first set the graph mode to RESTORING; to merge a backed-up graph into an existing graph, first set the graph mode to MERGING.
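A minimal sketch of the destructive command, graph-clear, using exactly the confirmation string documented above:
# wipe all schema and data of the graph set in the global options
bin/hugegraph graph-clear -c "I'm sure to delete all data"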
3.4 Asynchronous task management: task-list, task-get and task-delete
- task-list, list the asynchronous tasks in a graph, filterable by task status
- --status, optional; the status of the tasks to view, i.e. filter tasks by status
- --limit, optional; the number of tasks to fetch, default -1 meaning all matching tasks
- task-get, get the details of an asynchronous task
- --task-id, required; the ID of the asynchronous task
- task-delete, delete the information of an asynchronous task
- --task-id, required; the ID of the asynchronous task
- task-cancel, cancel the execution of an asynchronous task
- --task-id, the ID of the asynchronous task to cancel
- task-clear, clean up completed asynchronous tasks
- --force, optional; when set, clean up all asynchronous tasks, cancelling unfinished ones first and then clearing them all. By default only completed tasks are cleaned up
3.5 Gremlin: gremlin-execute and gremlin-schedule (see the sketch after this list)
- gremlin-execute, send Gremlin statements to HugeGraph-Server to run queries or modifications; executes synchronously and returns the results when finished
- --file or -f, the script file to execute, UTF-8 encoded; mutually exclusive with --script
- --script or -s, the script string to execute; mutually exclusive with --file
- --aliases or -a, Gremlin alias settings, format: key1=value1,key2=value2,…
- --bindings or -b, Gremlin binding settings, format: key1=value1,key2=value2,…
- --language or -l, language of the Gremlin script, default gremlin-groovy
--file and --script are mutually exclusive; exactly one must be set
- gremlin-schedule, send Gremlin statements to HugeGraph-Server to run queries or modifications; executes asynchronously and returns the asynchronous task id immediately after submission
- --file or -f, the script file to execute, UTF-8 encoded; mutually exclusive with --script
- --script or -s, the script string to execute; mutually exclusive with --file
- --bindings or -b, Gremlin binding settings, format: key1=value1,key2=value2,…
- --language or -l, language of the Gremlin script, default gremlin-groovy
--file and --script are mutually exclusive; exactly one must be set
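A minimal sketch of a synchronous query (the Gremlin snippet itself is an illustrative assumption):
# count the vertices of the current graph, synchronously
bin/hugegraph gremlin-execute --script 'g.V().count()'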
3.6 Backup/restore
- backup, back up the schema or data of a graph out of the HugeGraph system, stored as JSON on local disk or HDFS (a combined backup/restore sketch follows after this section's option lists)
- --format, the backup format, available values are [json, text], default json
- --all-properties, whether to back up all properties of vertices/edges, only effective when --format is text, default false
- --label, the type of vertices/edges to back up, only effective when --format is text and only when backing up vertices or edges
- --properties, the properties of vertices/edges to back up, comma separated, only effective when --format is text and only when backing up vertices or edges
- --compress, whether to compress the data during backup, default true
- --directory or -d, the directory to store schema or data in; default './{graphName}' for a local directory and '{fs.default.name}/{graphName}' for HDFS
- --huge-types or -t, the data types to back up, comma separated; available values are 'all' or a combination of one or more of [vertex,edge,vertex_label,edge_label,property_key,index_label]; 'all' means all 6 types, i.e. vertices, edges and all schema
- --log or -l, the log directory, default the current directory
- --retry, the number of retries on failure, default 3
- --split-size or -s, the chunk size for splitting vertices or edges during backup, default 1048576
- -D, dynamic parameters in the form -Dkey=value, used to specify HDFS configuration options when backing up data to HDFS, e.g.: -Dfs.default.name=hdfs://localhost:9000
- restore, restore schema or data stored as JSON into a new graph (RESTORING mode) or merge it into an existing graph (MERGING mode)
- --directory or -d, the directory storing the schema or data; default './{graphName}' for a local directory and '{fs.default.name}/{graphName}' for HDFS
- --clean, whether to delete the directory specified by --directory after restoring the graph, default false
- --huge-types or -t, the data types to restore, comma separated; available values are 'all' or a combination of one or more of [vertex,edge,vertex_label,edge_label,property_key,index_label]; 'all' means all 6 types, i.e. vertices, edges and all schema
- --log or -l, the log directory, default the current directory
- --retry, the number of retries on failure, default 3
- -D, dynamic parameters in the form -Dkey=value, used to specify HDFS configuration options when restoring a graph from HDFS, e.g.: -Dfs.default.name=hdfs://localhost:9000
restore can only recover backups that were made with --format json
- migrate, migrate the currently connected graph to another HugeGraphServer
- --target-graph, the name of the target graph, default hugegraph
- --target-url, the HugeGraphServer hosting the target graph, default http://127.0.0.1:8081
- --target-username, the username for accessing the target graph
- --target-password, the password for accessing the target graph
- --target-timeout, the timeout for accessing the target graph
- --target-trust-store-file, the truststore file used to access the target graph
- --target-trust-store-password, the truststore password used to access the target graph
- --directory or -d, the directory storing the source graph's schema or data during migration; default './{graphName}' for a local directory and '{fs.default.name}/{graphName}' for HDFS
- --huge-types or -t, the data types to migrate, comma separated; available values are 'all' or a combination of one or more of [vertex,edge,vertex_label,edge_label,property_key,index_label]; 'all' means all 6 types, i.e. vertices, edges and all schema
- --log or -l, the log directory, default the current directory
- --retry, the number of retries on failure, default 3
- --split-size or -s, the chunk size for splitting vertices or edges when backing up the source graph during migration, default 1048576
- -D, dynamic parameters in the form -Dkey=value, used to specify HDFS configuration options when data must be backed up to HDFS during migration, e.g.: -Dfs.default.name=hdfs://localhost:9000
- --graph-mode or -m, the mode to set on the target graph when restoring the source graph into it; legal values are [RESTORING, MERGING]
- --keep-local-data, whether to keep the backup of the source graph produced during migration; default false, i.e. by default the source-graph backup is not kept after migration
- schedule-backup, back up the graph periodically and keep a certain number of the latest backups (currently only the local file system is supported)
- --directory or -d, required; the directory for the backup data
- --backup-num, optional; the number of latest backups to keep, default 3
- --interval, optional; the backup period, in the same format as Linux crontab
- dump, export all vertices and edges of the whole graph, stored by default as JSON in the format
vertex vertex-edge1 vertex-edge2...
You can also customize the storage format: implement a class extending Formatter, e.g. CustomFormatter, under the hugegraph-tools/src/main/java/com/baidu/hugegraph/formatter directory, and specify that class as the formatter when running, e.g. bin/hugegraph dump -f CustomFormatter
- --formatter or -f, the formatter to use, default JsonFormatter
- --directory or -d, the directory storing schema or data, default the current directory
- --log or -l, the log directory, default the current directory
- --retry, the number of retries on failure, default 3
- --split-size or -s, the chunk size for splitting vertices or edges during backup, default 1048576
- -D, dynamic parameters in the form -Dkey=value, used to specify HDFS configuration options when backing up data to HDFS, e.g.: -Dfs.default.name=hdfs://localhost:9000
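A minimal end-to-end sketch of backup and restore into a fresh graph, using only the subcommands and options documented above (the directory name is an illustrative assumption):
# back up everything (vertices, edges and all schema) to ./backup
bin/hugegraph backup -t all -d ./backup
# restoring into a new graph requires RESTORING mode first
bin/hugegraph graph-mode-set -m RESTORING
bin/hugegraph restore -t all -d ./backup
# switch the graph back to normal mode afterwards
bin/hugegraph graph-mode-set -m NONE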
3.7 Installation and deployment
- deploy, download, install and start HugeGraph-Server and HugeGraph-Studio in one step
- -v, required; the version of HugeGraph-Server and HugeGraph-Studio to install, latest is 0.9
- -p, required; the directory to install HugeGraph-Server and HugeGraph-Studio into
- -u, optional; the link from which to download the HugeGraph-Server and HugeGraph-Studio packages
- clear, clean up the HugeGraph-Server and HugeGraph-Studio directories and tarballs
- -p, required; the directory of the HugeGraph-Server and HugeGraph-Studio to clean up
- start-all, start HugeGraph-Server and HugeGraph-Studio in one step and start monitoring, restarting the services automatically when they die
- -v, required; the version of HugeGraph-Server and HugeGraph-Studio to start, latest is 0.9
- -p, required; the directory HugeGraph-Server and HugeGraph-Studio are installed in
- stop-all, stop HugeGraph-Server and HugeGraph-Studio in one step
The deploy command has an optional parameter -u; when provided, the given download address is used instead of the default to fetch the tarball, and the address is written into the ~/hugegraph-download-url-prefix file. Afterwards, when no address is specified, the tarball is downloaded from the address in ~/hugegraph-download-url-prefix; if neither -u nor ~/hugegraph-download-url-prefix exists, the default download address is used. A sketch of this lifecycle follows below.
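A minimal sketch of the deploy lifecycle (the version and install path are illustrative assumptions):
# download, install and start version 0.9 under ./services
bin/hugegraph deploy -v 0.9 -p services
# later: start both services again with monitoring, or stop them
bin/hugegraph start-all -v 0.9 -p services
bin/hugegraph stop-all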
3.8 Detailed command parameters
The detailed parameters of each subcommand are as follows:
Usage: hugegraph [options] [command] [command options]
2. Task submission: after the task is submitted successfully, the graph area returns the submission result and the task ID.
3. Task details: a [View] entry is provided, which jumps to the task details showing how the current task is executing. After jumping to the task center, the currently executing task row is shown directly. Clicking the view entry jumps to the task management list, as follows:
4. View the results
- the results are displayed as JSON
3.5.4 OLAP algorithm tasks
Hubble does not yet provide visual execution of OLAP algorithms; you can call the RESTful API to run OLAP-type algorithm tasks, find the task by its ID in task management, and check its progress, results and so on.
3.5.5 Delete metadata, rebuild indexes
1. Create the task
- in the metadata modeling module, an asynchronous metadata-deletion task can be created when deleting metadata
- when editing an existing vertex/edge type, an asynchronous index-creation task can be created when adding an index
2. Task details
- after confirming/saving, you can jump to the task center to view the details of the current task
3.5 - HugeGraph-Client Quick Start
1 HugeGraph-Client overview
HugeGraph-Client sends HTTP requests to HugeGraph-Server and fetches and parses the server's results. Currently only a Java version is provided. You can use HugeGraph-Client to write Java code that operates on HugeGraph, e.g. adding, removing, updating and querying metadata and graph data, or executing gremlin statements.
2 Environment requirements
- java 11 (compatible with java 8)
- maven 3.5+
3 Usage workflow
The basic steps for using HugeGraph-Client are:
- create a new Eclipse/IDEA Maven project;
- add the HugeGraph-Client dependency to the pom file;
- create a class and call the HugeGraph-Client interfaces;
See the complete example in the next section for details.
4 Complete example
4.1 Create a Maven project
You can use either Eclipse or IntelliJ IDEA to create the project:
4.2 Add the hugegraph-client dependency
Add the hugegraph-client dependency:
<dependencies>
<dependency>
    <groupId>org.apache.hugegraph</groupId>
<artifactId>hugegraph-client</artifactId>
<version>${version}</version>
</dependency>
import java.util.Iterator;
import java.util.List;
import org.apache.hugegraph.driver.GraphManager;
import org.apache.hugegraph.driver.GremlinManager;
import org.apache.hugegraph.driver.HugeClient;
import org.apache.hugegraph.driver.SchemaManager;
import org.apache.hugegraph.structure.constant.T;
import org.apache.hugegraph.structure.graph.Edge;
import org.apache.hugegraph.structure.graph.Path;
import org.apache.hugegraph.structure.graph.Vertex;
import org.apache.hugegraph.structure.gremlin.Result;
import org.apache.hugegraph.structure.gremlin.ResultSet;
public class SingleExample {
.create();
GraphManager graph = hugeClient.graph();
        Vertex marko = graph.addVertex(T.LABEL, "person", "name", "marko",
                                       "age", 29, "city", "Beijing");
        Vertex vadas = graph.addVertex(T.LABEL, "person", "name", "vadas",
                                       "age", 27, "city", "Hongkong");
        Vertex lop = graph.addVertex(T.LABEL, "software", "name", "lop",
                                     "lang", "java", "price", 328);
        Vertex josh = graph.addVertex(T.LABEL, "person", "name", "josh",
                                      "age", 32, "city", "Beijing");
        Vertex ripple = graph.addVertex(T.LABEL, "software", "name", "ripple",
                                        "lang", "java", "price", 199);
        Vertex peter = graph.addVertex(T.LABEL, "person", "name", "peter",
                                       "age", 35, "city", "Shanghai");
marko.addEdge("knows", vadas, "date", "2016-01-10", "weight", 0.5);
4.3.2 BatchExample
import java.util.ArrayList;
import java.util.List;
import org.apache.hugegraph.driver.GraphManager;
import org.apache.hugegraph.driver.HugeClient;
import org.apache.hugegraph.driver.SchemaManager;
import org.apache.hugegraph.structure.graph.Edge;
import org.apache.hugegraph.structure.graph.Vertex;
public class BatchExample {
hugeClient.close();
}
}
4.4 Run the example
Before running the example the server must be started; see HugeGraph-Server Quick Start for the startup procedure.
4.5 Notes on the example
3.6 - HugeGraph-Computer Quick Start
1 HugeGraph-Computer overview
HugeGraph-Computer is a distributed graph processing system (OLAP). It is an implementation of Pregel and can run on Kubernetes.
Features
- Supports distributed MPP graph computing and integrates HugeGraph as the graph input/output storage.
- Algorithms are based on the BSP (Bulk Synchronous Parallel) model and computed through multiple parallel iterations; each iteration is one superstep.
- Automatic memory management. The framework never runs out of memory (OOM), because it spills some data to disk when there is not enough memory to hold it all.
- Parts of edges or the messages of super nodes can stay in memory, so you never lose them.
- You can load data from HDFS or HugeGraph, or any other system.
- You can output results to HDFS or HugeGraph, or any other system.
- Easy to develop new algorithms. You only need to focus on vertex-centric processing as if on a single server, without worrying about message transfer or memory/storage management.
2 Get started
2.1 Run the PageRank algorithm locally
To run algorithms with HugeGraph-Computer, you need 64-bit Java 11 or later installed.
You also need to deploy HugeGraph-Server and Etcd first.
There are two ways to get HugeGraph-Computer:
- download the compiled tarball
- clone the source code, compile and package it
2.1 Download the compiled archive
Download the latest HugeGraph-Computer release package:
wget https://github.com/apache/hugegraph-computer/releases/download/v${version}/hugegraph-computer-${version}.tar.gz
tar zxvf hugegraph-computer-${version}.tar.gz
2.2 Clone source code to compile and package
Clone the latest HugeGraph-Computer source package:
$ git clone https://github.com/apache/hugegraph-computer.git
Build the tar package:
cd hugegraph-computer
mvn clean package -DskipTests
2.3 Start the master node
You can specify the config file with the -c parameter; for more computer configuration see: Computer Config Options
cd hugegraph-computer-${version}
bin/start-computer.sh -d local -r master
2.4 Start the worker node
bin/start-computer.sh -d local -r worker

2.5 Query the algorithm results
2.5.1 Enable OLAP index queries for the server
If the OLAP index is not enabled, it needs to be enabled; see also: modify-graphs-read-mode
PUT http://localhost:8080/graphs/hugegraph/graph_read_mode
"ALL"
2.5.2 Query the page_rank property values:
curl "http://localhost:8080/graphs/hugegraph/graph/vertices?page&limit=3" | gunzip
}
There are many configuration items above, but for now only two matter: channelizer and graphs.
- graphs: the graphs to open when GremlinServer starts; this item is a map whose key is the graph's name and whose value is the path of that graph's configuration file;
- channelizer: GremlinServer communicates with clients in one of two ways, WebSocket or HTTP (the default). With WebSocket you can quickly try out HugeGraph's features via Gremlin-Console, but it does not support large-scale data import;
the HTTP transport is recommended, and HugeGraph's peripheral components are all built on HTTP. By default GremlinServer serves at localhost:8182; to change this, configure host and port:
- host: the hostname or IP of the machine GremlinServer is deployed on; HugeGraphServer currently does not support distributed deployment, and GremlinServer is not exposed directly to users;
- port: the port of the machine GremlinServer is deployed on;
You also need to add the matching option gremlinserver.url=http://host:port to rest-server.properties.
3 rest-server.properties
The default content of the rest-server.properties file is as follows:
# bind url
restserver.url=http://127.0.0.1:8080
# gremlin server url, need to be consistent with host and port in gremlin-server.yaml
#gremlinserver.url=http://127.0.0.1:8182

# graphs list with pair NAME:CONF_PATH
graphs=[hugegraph:conf/hugegraph.properties]

# authentication
#auth.authenticator=
#auth.admin_token=
#auth.user_tokens=[]

server.id=server-1
server.role=master
- restserver.url: the url at which RestServer provides service; change it according to the actual environment;
- graphs: RestServer also needs to open graphs at startup; this item is a map whose key is the graph's name and whose value is the path of that graph's configuration file;
Note: both gremlin-server.yaml and rest-server.properties contain the graphs configuration item, while the init-store command initializes the graphs listed under graphs in gremlin-server.yaml.
The gremlinserver.url option is the url at which GremlinServer provides service to RestServer; it defaults to http://localhost:8182 and, if changed, must match the host and port in gremlin-server.yaml.
4 hugegraph.properties
hugegraph.properties is a family of files: if the system hosts multiple graphs, there will be several similar files. The file configures parameters related to graph storage and queries; its default content is as follows:
# gremlin entrence to create graph
gremlin.graph=com.baidu.hugegraph.HugeFactory

# cache config
#schema.cache_capacity=100000
# vertex-cache default is 1000w, 10min expired
#vertex.cache_capacity=10000000
#vertex.cache_expire=600
# edge-cache default is 100w, 10min expired
#edge.cache_capacity=1000000
#edge.cache_expire=600

# schema illegal name template
#schema.illegal_name_regex=\s+|~.*

#vertex.default_label=vertex

backend=rocksdb
serializer=binary

store=hugegraph

raft.mode=false
raft.safe_read=false
raft.use_snapshot=false
raft.endpoint=127.0.0.1:8281
raft.group_peers=127.0.0.1:8281,127.0.0.1:8282,127.0.0.1:8283
raft.path=./raft-log
raft.use_replicator_pipeline=true
raft.election_timeout=10000
raft.snapshot_interval=3600
raft.backend_threads=48
raft.read_index_threads=8
raft.queue_size=16384
raft.queue_publish_timeout=60
raft.apply_batch=1
raft.rpc_threads=80
raft.rpc_connect_timeout=5000
raft.rpc_timeout=60000

# if use 'ikanalyzer', need download jar from 'https://github.com/apache/hugegraph-doc/raw/ik_binary/dist/server/ikanalyzer-2012_u6.jar' to lib directory
search.text_analyzer=jieba
search.text_analyzer_mode=INDEX

# rocksdb backend config
#rocksdb.data_path=/path/to/disk
#rocksdb.wal_path=/path/to/disk

# cassandra backend config
cassandra.host=localhost
cassandra.port=9042
cassandra.username=
cassandra.password=
#cassandra.connect_timeout=5
#cassandra.read_timeout=20
#cassandra.keyspace.strategy=SimpleStrategy
#cassandra.keyspace.replication=3

# hbase backend config
#hbase.hosts=localhost
#hbase.port=2181
#hbase.znode_parent=/hbase
#hbase.threads_max=64

# mysql backend config
#jdbc.driver=com.mysql.jdbc.Driver
#jdbc.url=jdbc:mysql://127.0.0.1:3306
#jdbc.username=root
#jdbc.password=
#jdbc.reconnect_max_times=3
#jdbc.reconnect_interval=3
#jdbc.sslmode=false

# postgresql & cockroachdb backend config
#jdbc.driver=org.postgresql.Driver
#jdbc.url=jdbc:postgresql://localhost:5432/
#jdbc.username=postgres
#jdbc.password=

# palo backend config
#palo.host=127.0.0.1
#palo.poll_interval=10
#palo.temp_dir=./palo-data
#palo.file_limit_size=32
Pay particular attention to the uncommented items:
- gremlin.graph: the startup entry of GremlinServer; do not modify this item;
- backend: the backend storage to use; available values are memory, cassandra, scylladb, mysql, hbase, postgresql and rocksdb;
- serializer: mainly for internal use, serializing schema, vertices and edges to the backend; the corresponding available values are text, cassandra, scylladb and binary (note: for the rocksdb backend the value must be binary; for the other backends the serializer value must match the backend value, e.g. hbase for the hbase backend);
- store: the database name the graph is stored under in the backend; in cassandra and scylladb it is the keyspace name. Its value is unrelated to the graph names in GremlinServer and RestServer, but for clarity it is advisable to use the same name;
- cassandra.host: only meaningful when backend is cassandra or scylladb; the seeds of the cassandra/scylladb cluster;
- cassandra.port: only meaningful when backend is cassandra or scylladb; the native port of the cassandra/scylladb cluster;
- rocksdb.data_path: only meaningful when backend is rocksdb; rocksdb's data directory;
- rocksdb.wal_path: only meaningful when backend is rocksdb; rocksdb's log directory;
- admin.token: a token for retrieving the server's configuration information, e.g.: http://localhost:8080/graphs/hugegraph/conf?token=162f7848-0b6d-4faf-b557-3a0797869c55
5 Multi-graph configuration
The system can host multiple graphs, and each graph can use a different backend; for example, graphs hugegraph and hugegraph1, where hugegraph uses cassandra as the backend and hugegraph1 uses rocksdb.
The configuration is simple:
Modify gremlin-server.yaml
Add a key-value pair to the graphs section of gremlin-server.yaml, with the graph's name as the key and the path of the graph's configuration file as the value, e.g.:
graphs: {
hugegraph: conf/hugegraph.properties,
hugegraph1: conf/hugegraph1.properties
}
Modify rest-server.properties
Add a key-value pair to the graphs section of rest-server.properties, with the graph's name as the key and the path of the graph's configuration file as the value, e.g.:
graphs=[hugegraph:conf/hugegraph.properties, hugegraph1:conf/hugegraph1.properties]

Add hugegraph1.properties
Copy hugegraph.properties, name it hugegraph1.properties, and change the graph's database name and the backend-related parameters, e.g.:
store=hugegraph1

...

backend=rocksdb
serializer=binary

Stop the server, run init-store.sh (creating the database for the new graph), then restart the server:
$ bin/stop-hugegraph.sh
$ bin/init-store.sh
$ bin/start-hugegraph.sh
4.2 - HugeGraph configuration options
Gremlin Server configuration options
Corresponding configuration file: gremlin-server.yaml

config option | default value | description
---|---|---
host | 127.0.0.1 | The host or ip of Gremlin Server.
port | 8182 | The listening port of Gremlin Server.
graphs | hugegraph: conf/hugegraph.properties | The map of graphs with name and config file path.
scriptEvaluationTimeout | 30000 | The timeout for gremlin script execution(millisecond).
channelizer | org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer | Indicates the protocol which the Gremlin Server provides service.
authentication | authenticator: com.baidu.hugegraph.auth.StandardAuthenticator, config: {tokens: conf/rest-server.properties} | The authenticator and config(contains tokens path) of authentication mechanism.
Rest Server & API configuration options
Corresponding configuration file: rest-server.properties

config option | default value | description
---|---|---
graphs | [hugegraph:conf/hugegraph.properties] | The map of graphs' name and config file.
server.id | server-1 | The id of rest server, used for license verification.
server.role | master | The role of nodes in the cluster, available types are [master, worker, computer]
restserver.url | http://127.0.0.1:8080 | The url for listening of rest server.
ssl.keystore_file | server.keystore | The path of server keystore file used when https protocol is enabled.
ssl.keystore_password | | The password of the path of the server keystore file used when the https protocol is enabled.
restserver.max_worker_threads | 2 * CPUs | The maximum worker threads of rest server.
restserver.min_free_memory | 64 | The minimum free memory(MB) of rest server, requests will be rejected when the available memory of system is lower than this value.
restserver.request_timeout | 30 | The time in seconds within which a request must complete, -1 means no timeout.
restserver.connection_idle_timeout | 30 | The time in seconds to keep an inactive connection alive, -1 means no timeout.
restserver.connection_max_requests | 256 | The max number of HTTP requests allowed to be processed on one keep-alive connection, -1 means unlimited.
gremlinserver.url | http://127.0.0.1:8182 | The url of gremlin server.
gremlinserver.max_route | 8 | The max route number for gremlin server.
gremlinserver.timeout | 30 | The timeout in seconds of waiting for gremlin server.
batch.max_edges_per_batch | 500 | The maximum number of edges submitted per batch.
batch.max_vertices_per_batch | 500 | The maximum number of vertices submitted per batch.
batch.max_write_ratio | 50 | The maximum thread ratio for batch writing, only take effect if the batch.max_write_threads is 0.
batch.max_write_threads | 0 | The maximum threads for batch writing, if the value is 0, the actual value will be set to batch.max_write_ratio * restserver.max_worker_threads.
auth.authenticator | | The class path of authenticator implementation. e.g., com.baidu.hugegraph.auth.StandardAuthenticator, or com.baidu.hugegraph.auth.ConfigAuthenticator.
auth.admin_token | 162f7848-0b6d-4faf-b557-3a0797869c55 | Token for administrator operations, only for com.baidu.hugegraph.auth.ConfigAuthenticator.
auth.graph_store | hugegraph | The name of graph used to store authentication information, like users, only for com.baidu.hugegraph.auth.StandardAuthenticator.
auth.user_tokens | [hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31] | The map of user tokens with name and password, only for com.baidu.hugegraph.auth.ConfigAuthenticator.
auth.audit_log_rate | 1000.0 | The max rate of audit log output per user, default value is 1000 records per second.
auth.cache_capacity | 10240 | The max cache capacity of each auth cache item.
auth.cache_expire | 600 | The expiration time in seconds of vertex cache.
auth.remote_url | | If the address is empty, it provide auth service, otherwise it is auth client and also provide auth service through rpc forwarding. The remote url can be set to multiple addresses, which are concat by ','.
auth.token_expire | 86400 | The expiration time in seconds after token created.
auth.token_secret | FXQXbJtbCLxODc6tGci732pkH1cyf8Qg | Secret key of HS256 algorithm.
exception.allow_trace | false | Whether to allow exception trace stack.
基本配置项
基本配置项及后端配置项对应配置文件:{graph-name}.properties,如hugegraph.properties
| config option | default value | description |
|---|---|---|
| gremlin.graph | com.baidu.hugegraph.HugeFactory | Gremlin entrance to create graph. |
| backend | rocksdb | The data store type, available values are [memory, rocksdb, cassandra, scylladb, hbase, mysql]. |
| serializer | binary | The serializer for backend store, available values are [text, binary, cassandra, hbase, mysql]. |
| store | hugegraph | The database name like Cassandra Keyspace. |
| store.connection_detect_interval | 600 | The interval in seconds for detecting connections; if the idle time of a connection exceeds this value, detect it and reconnect if needed before using; value 0 means detecting every time. |
| store.graph | g | The graph table name, which stores vertex, edge and property. |
| store.schema | m | The schema table name, which stores meta data. |
| store.system | s | The system table name, which stores system data. |
| schema.illegal_name_regex | .\s+$\|~. | The regex that specifies the illegal format for schema names. |
| schema.cache_capacity | 10000 | The max cache size(items) of schema cache. |
| vertex.cache_type | l2 | The type of vertex cache, allowed values are [l1, l2]. |
| vertex.cache_capacity | 10000000 | The max cache size(items) of vertex cache. |
| vertex.cache_expire | 600 | The expiration time in seconds of vertex cache. |
| vertex.check_customized_id_exist | false | Whether to check that vertices exist for those using customized id strategy. |
| vertex.default_label | vertex | The default vertex label. |
| vertex.tx_capacity | 10000 | The max size(items) of vertices(uncommitted) in transaction. |
| vertex.check_adjacent_vertex_exist | false | Whether to check that the adjacent vertices of edges exist. |
| vertex.lazy_load_adjacent_vertex | true | Whether to lazy load adjacent vertices of edges. |
| vertex.part_edge_commit_size | 5000 | Whether to enable the mode to commit part of the edges of a vertex; enabled if commit size > 0, 0 means disabled. |
| vertex.encode_primary_key_number | true | Whether to encode number value of primary key in vertex id. |
| vertex.remove_left_index_at_overwrite | false | Whether to remove the left (stale) index at overwrite. |
| edge.cache_type | l2 | The type of edge cache, allowed values are [l1, l2]. |
| edge.cache_capacity | 1000000 | The max cache size(items) of edge cache. |
| edge.cache_expire | 600 | The expiration time in seconds of edge cache. |
| edge.tx_capacity | 10000 | The max size(items) of edges(uncommitted) in transaction. |
| query.page_size | 500 | The size of each page when querying by paging. |
| query.batch_size | 1000 | The size of each batch when querying by batch. |
| query.ignore_invalid_data | true | Whether to ignore invalid data of vertex or edge. |
| query.index_intersect_threshold | 1000 | The maximum number of intermediate results to intersect indexes when querying by multiple single index properties. |
| query.ramtable_edges_capacity | 20000000 | The maximum number of edges in ramtable, including OUT and IN edges. |
| query.ramtable_enable | false | Whether to enable ramtable for query of adjacent edges. |
| query.ramtable_vertices_capacity | 10000000 | The maximum number of vertices in ramtable; generally the largest vertex id is used as capacity. |
| query.optimize_aggregate_by_index | false | Whether to optimize aggregate query(like count) by index. |
| oltp.concurrent_depth | 10 | The min depth to enable concurrent oltp algorithm. |
| oltp.concurrent_threads | 10 | Thread number to concurrently execute oltp algorithm. |
| oltp.collection_type | EC | The implementation type of collections used in oltp algorithm. |
| rate_limit.read | 0 | The max rate(times/s) to execute query of vertices/edges. |
| rate_limit.write | 0 | The max rate(items/s) to add/update/delete vertices/edges. |
| task.wait_timeout | 10 | Timeout in seconds for waiting for the task to complete, such as when truncating or clearing the backend. |
| task.input_size_limit | 16777216 | The job input size limit in bytes. |
| task.result_size_limit | 16777216 | The job result size limit in bytes. |
| task.sync_deletion | false | Whether to delete schema or expired data synchronously. |
| task.ttl_delete_batch | 1 | The batch size used to delete expired data. |
| computer.config | /conf/computer.yaml | The config file path of computer job. |
| search.text_analyzer | ikanalyzer | Choose a text analyzer for searching the vertex/edge properties, available types are [word, ansj, hanlp, smartcn, jieba, jcseg, mmseg4j, ikanalyzer]. (For 'ikanalyzer', the jar needs to be downloaded from 'https://github.com/apache/hugegraph-doc/raw/ik_binary/dist/server/ikanalyzer-2012_u6.jar' to the lib directory.) |
| search.text_analyzer_mode | smart | Specify the mode for the text analyzer; the available modes per analyzer are {word: [MaximumMatching, ReverseMaximumMatching, MinimumMatching, ReverseMinimumMatching, BidirectionalMaximumMatching, BidirectionalMinimumMatching, BidirectionalMaximumMinimumMatching, FullSegmentation, MinimalWordCount, MaxNgramScore, PureEnglish], ansj: [BaseAnalysis, IndexAnalysis, ToAnalysis, NlpAnalysis], hanlp: [standard, nlp, index, nShort, shortest, speed], smartcn: [], jieba: [SEARCH, INDEX], jcseg: [Simple, Complex], mmseg4j: [Simple, Complex, MaxWord], ikanalyzer: [smart, max_word]}. |
| snowflake.datecenter_id | 0 | The datacenter id of snowflake id generator. |
| snowflake.force_string | false | Whether to force the snowflake long id to be a string. |
| snowflake.worker_id | 0 | The worker id of snowflake id generator. |
| raft.mode | false | Whether the backend storage works in raft mode. |
| raft.safe_read | false | Whether to use linearly consistent read. |
| raft.use_snapshot | false | Whether to use snapshot. |
| raft.endpoint | 127.0.0.1:8281 | The peerid of current raft node. |
| raft.group_peers | 127.0.0.1:8281,127.0.0.1:8282,127.0.0.1:8283 | The peers of current raft group. |
| raft.path | ./raft-log | The log path of current raft node. |
| raft.use_replicator_pipeline | true | Whether to use the replicator pipeline; when turned on, multiple logs can be sent in parallel, and the next log does not have to wait for the ack message of the current log before being sent. |
| raft.election_timeout | 10000 | Timeout in milliseconds to launch a round of election. |
| raft.snapshot_interval | 3600 | The interval in seconds to trigger snapshot save. |
| raft.backend_threads | current CPU v-cores | The thread number used to apply tasks to the backend. |
| raft.read_index_threads | 8 | The thread number used to execute reading index. |
| raft.apply_batch | 1 | The apply batch size to trigger disruptor event handler. |
| raft.queue_size | 16384 | The disruptor buffers size for jraft RaftNode, StateMachine and LogManager. |
| raft.queue_publish_timeout | 60 | The timeout in seconds when publishing an event into the disruptor. |
| raft.rpc_threads | 80 | The rpc threads for jraft RPC layer. |
| raft.rpc_connect_timeout | 5000 | The rpc connect timeout for jraft rpc. |
| raft.rpc_timeout | 60000 | The rpc timeout for jraft rpc. |
| raft.rpc_buf_low_water_mark | 10485760 | The ChannelOutboundBuffer's low water mark of netty; when the buffer size is less than this value, ChannelOutboundBuffer.isWritable() returns true, which means low downstream pressure or a good network. |
| raft.rpc_buf_high_water_mark | 20971520 | The ChannelOutboundBuffer's high water mark of netty; only when the buffer size exceeds this value does ChannelOutboundBuffer.isWritable() return false, which means the downstream pressure is too great to process requests or the network is very congested, and the upstream needs to limit its rate at this time. |
| raft.read_strategy | ReadOnlyLeaseBased | The strategy for linearizable reads. |
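For example, a minimal {graph-name}.properties sketch built only from the defaults listed above:

gremlin.graph=com.baidu.hugegraph.HugeFactory
backend=rocksdb
serializer=binary
store=hugegraph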
RPC server config options
| config option | default value | description |
|---|---|---|
| rpc.client_connect_timeout | 20 | The timeout(in seconds) of rpc client connect to rpc server. |
| rpc.client_load_balancer | consistentHash | The rpc client uses a load-balancing algorithm to access multiple rpc servers in one cluster. Default value is 'consistentHash', meaning forwarding by request parameters. |
| rpc.client_read_timeout | 40 | The timeout(in seconds) of rpc client read from rpc server. |
| rpc.client_reconnect_period | 10 | The period(in seconds) of rpc client reconnect to rpc server. |
| rpc.client_retries | 3 | Failed retry number of rpc client calls to rpc server. |
| rpc.config_order | 999 | Sofa rpc configuration file loading order; the larger the value, the later it is loaded. |
| rpc.logger_impl | com.alipay.sofa.rpc.log.SLF4JLoggerImpl | Sofa rpc log implementation class. |
| rpc.protocol | bolt | Rpc communication protocol; client and server need to specify the same value. |
| rpc.remote_url | | The remote urls of rpc peers; it can be set to multiple addresses, concatenated by ','; an empty value means not enabled. |
| rpc.server_adaptive_port | false | Whether the bound port is adaptive; if enabled, when the port is in use, it is automatically incremented by 1 to probe the next available port. Note that this process is not atomic, so there may still be port conflicts. |
| rpc.server_host | | The hosts/ips bound by rpc server to provide services; an empty value means not enabled. |
| rpc.server_port | 8090 | The port bound by rpc server to provide services. |
| rpc.server_timeout | 30 | The timeout(in seconds) of rpc server execution. |
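As an illustrative sketch only (addresses and ports are placeholders), a node that provides rpc services and knows its peers might set:

rpc.server_host=127.0.0.1
rpc.server_port=8090
rpc.remote_url=127.0.0.1:8090,127.0.0.1:8091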
Cassandra backend config options
| config option | default value | description |
|---|---|---|
| backend | | Must be set to cassandra. |
| serializer | | Must be set to cassandra. |
| cassandra.host | localhost | The seeds hostname or ip address of cassandra cluster. |
| cassandra.port | 9042 | The seeds port address of cassandra cluster. |
| cassandra.connect_timeout | 5 | The cassandra driver connect server timeout(seconds). |
| cassandra.read_timeout | 20 | The cassandra driver read from server timeout(seconds). |
| cassandra.keyspace.strategy | SimpleStrategy | The replication strategy of keyspace, valid values are SimpleStrategy or NetworkTopologyStrategy. |
| cassandra.keyspace.replication | [3] | The keyspace replication factor of SimpleStrategy, like '[3]'; or replicas in each datacenter of NetworkTopologyStrategy, like '[dc1:2,dc2:1]'. |
| cassandra.username | | The username to use to login to cassandra cluster. |
| cassandra.password | | The password corresponding to cassandra.username. |
| cassandra.compression_type | none | The compression algorithm of cassandra transport: none/snappy/lz4. |
| cassandra.jmx_port | 7199 | The port of JMX API service for cassandra. |
| cassandra.aggregation_timeout | 43200 | The timeout in seconds of waiting for aggregation. |
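Putting the options above together, a Cassandra backend section of {graph-name}.properties might look like this (values are the listed defaults; treat it as a sketch):

backend=cassandra
serializer=cassandra
cassandra.host=localhost
cassandra.port=9042
cassandra.keyspace.strategy=SimpleStrategy
cassandra.keyspace.replication=[3]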
ScyllaDB backend config options
| config option | default value | description |
|---|---|---|
| backend | | Must be set to scylladb. |
| serializer | | Must be set to scylladb. |

All other options are the same as for the Cassandra backend.
RocksDB backend config options
| config option | default value | description |
|---|---|---|
| backend | | Must be set to rocksdb. |
| serializer | | Must be set to binary. |
| rocksdb.data_disks | [] | The optimized disks for storing data of RocksDB. The format of each element: STORE/TABLE: /path/disk. Allowed keys are [g/vertex, g/edge_out, g/edge_in, g/vertex_label_index, g/edge_label_index, g/range_int_index, g/range_float_index, g/range_long_index, g/range_double_index, g/secondary_index, g/search_index, g/shard_index, g/unique_index, g/olap]. |
| rocksdb.data_path | rocksdb-data | The path for storing data of RocksDB. |
| rocksdb.wal_path | rocksdb-data | The path for storing WAL of RocksDB. |
| rocksdb.allow_mmap_reads | false | Allow the OS to mmap file for reading sst tables. |
| rocksdb.allow_mmap_writes | false | Allow the OS to mmap file for writing. |
| rocksdb.block_cache_capacity | 8388608 | The amount of block cache in bytes that will be used by RocksDB, 0 means no block cache. |
| rocksdb.bloom_filter_bits_per_key | -1 | The bits per key in bloom filter; a good value is 10, which yields a filter with ~1% false positive rate; -1 means no bloom filter. |
| rocksdb.bloom_filter_block_based_mode | false | Use block based filter rather than full filter. |
| rocksdb.bloom_filter_whole_key_filtering | true | True if placing whole keys in the bloom filter, else place the prefix of keys. |
| rocksdb.bottommost_compression | NO_COMPRESSION | The compression algorithm for the bottommost level of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd. |
| rocksdb.bulkload_mode | false | Switch to the mode to bulk load data into RocksDB. |
| rocksdb.cache_index_and_filter_blocks | false | Indicating if we'd put index/filter blocks to the block cache. |
| rocksdb.compaction_style | LEVEL | Set compaction style for RocksDB: LEVEL/UNIVERSAL/FIFO. |
| rocksdb.compression | SNAPPY_COMPRESSION | The compression algorithm for compressing blocks of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd. |
| rocksdb.compression_per_level | [NO_COMPRESSION, NO_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION] | The compression algorithms for different levels of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd. |
| rocksdb.delayed_write_rate | 16777216 | The rate limit in bytes/s of user write requests when writes need to slow down because the compaction gets behind. |
| rocksdb.log_level | INFO | The info log level of RocksDB. |
| rocksdb.max_background_jobs | 8 | Maximum number of concurrent background jobs, including flushes and compactions. |
| rocksdb.level_compaction_dynamic_level_bytes | false | Whether to enable level_compaction_dynamic_level_bytes; if enabled, max_bytes_for_level_multiplier takes priority over max_bytes_for_level_base and the bytes of the base level are dynamic, for a more predictable LSM tree; it is useful to limit worst-case space amplification. Turning this feature on/off for an existing DB can cause unexpected LSM tree structure, so it's not recommended. |
| rocksdb.max_bytes_for_level_base | 536870912 | The upper-bound of the total size of level-1 files in bytes. |
| rocksdb.max_bytes_for_level_multiplier | 10.0 | The ratio between the total size of level (L+1) files and the total size of level L files for all L. |
| rocksdb.max_open_files | -1 | The maximum number of open files that can be cached by RocksDB, -1 means no limit. |
| rocksdb.max_subcompactions | 4 | The value represents the maximum number of threads per compaction job. |
| rocksdb.max_write_buffer_number | 6 | The maximum number of write buffers that are built up in memory. |
| rocksdb.max_write_buffer_number_to_maintain | 0 | The total maximum number of write buffers to maintain in memory. |
| rocksdb.min_write_buffer_number_to_merge | 2 | The minimum number of write buffers that will be merged together. |
| rocksdb.num_levels | 7 | Set the number of levels for this database. |
| rocksdb.optimize_filters_for_hits | false | This flag allows us to not store filters for the last level. |
| rocksdb.optimize_mode | true | Optimize for heavy workloads and big datasets. |
| rocksdb.pin_l0_filter_and_index_blocks_in_cache | false | Indicating if we'd put index/filter blocks to the block cache. |
| rocksdb.sst_path | | The path for ingesting SST file into RocksDB. |
| rocksdb.target_file_size_base | 67108864 | The target file size for compaction in bytes. |
| rocksdb.target_file_size_multiplier | 1 | The size ratio between a level L file and a level (L+1) file. |
| rocksdb.use_direct_io_for_flush_and_compaction | false | Enable the OS to use direct read/writes in flush and compaction. |
| rocksdb.use_direct_reads | false | Enable the OS to use direct I/O for reading sst tables. |
| rocksdb.write_buffer_size | 134217728 | Amount of data in bytes to build up in memory. |
| rocksdb.max_manifest_file_size | 104857600 | The max size of manifest file in bytes. |
| rocksdb.skip_stats_update_on_db_open | false | Whether to skip statistics update when opening the database; setting this flag true allows us to not update statistics. |
| rocksdb.max_file_opening_threads | 16 | The max number of threads used to open files. |
| rocksdb.max_total_wal_size | 0 | Total size of WAL files in bytes. Once WALs exceed this size, the flush of the related column families will be forced; 0 means no limit. |
| rocksdb.db_write_buffer_size | 0 | Total size of write buffers in bytes across all column families, 0 means no limit. |
| rocksdb.delete_obsolete_files_period | 21600 | The periodicity in seconds when obsolete files get deleted, 0 means always do full purge. |
| rocksdb.hard_pending_compaction_bytes_limit | 274877906944 | The hard limit to impose on pending compaction in bytes. |
| rocksdb.level0_file_num_compaction_trigger | 2 | Number of files to trigger level-0 compaction. |
| rocksdb.level0_slowdown_writes_trigger | 20 | Soft limit on number of level-0 files for slowing down writes. |
| rocksdb.level0_stop_writes_trigger | 36 | Hard limit on number of level-0 files for stopping writes. |
| rocksdb.soft_pending_compaction_bytes_limit | 68719476736 | The soft limit to impose on pending compaction in bytes. |
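For instance, rocksdb.data_disks can spread the vertex and out-edge tables across dedicated disks, following the STORE/TABLE:/path/disk format described above (the paths are illustrative):

rocksdb.data_path=rocksdb-data
rocksdb.wal_path=rocksdb-data
rocksdb.data_disks=[g/vertex:/disk1/vertex, g/edge_out:/disk2/edge_out]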
HBase backend config options
| config option | default value | description |
|---|---|---|
| backend | | Must be set to hbase. |
| serializer | | Must be set to hbase. |
| hbase.hosts | localhost | The hostnames or ip addresses of HBase zookeeper, separated with commas. |
| hbase.port | 2181 | The port address of HBase zookeeper. |
| hbase.threads_max | 64 | The max threads num of hbase connections. |
| hbase.znode_parent | /hbase | The znode parent path of HBase zookeeper. |
| hbase.zk_retry | 3 | The recovery retry times of HBase zookeeper. |
| hbase.aggregation_timeout | 43200 | The timeout in seconds of waiting for aggregation. |
| hbase.kerberos_enable | false | Whether Kerberos authentication is enabled for HBase. |
| hbase.kerberos_keytab | | The HBase's key tab file for kerberos authentication. |
| hbase.kerberos_principal | | The HBase's principal for kerberos authentication. |
| hbase.krb5_conf | etc/krb5.conf | Kerberos configuration file, including KDC IP, default realm, etc. |
| hbase.hbase_site | /etc/hbase/conf/hbase-site.xml | The HBase's configuration file. |
| hbase.enable_partition | true | Whether pre-split partitions are enabled for HBase. |
| hbase.vertex_partitions | 10 | The number of partitions of the HBase vertex table. |
| hbase.edge_partitions | 30 | The number of partitions of the HBase edge table. |
MySQL & PostgreSQL backend config options
| config option | default value | description |
|---|---|---|
| backend | | Must be set to mysql. |
| serializer | | Must be set to mysql. |
| jdbc.driver | com.mysql.jdbc.Driver | The JDBC driver class to connect database. |
| jdbc.url | jdbc:mysql://127.0.0.1:3306 | The url of database in JDBC format. |
| jdbc.username | root | The username to login database. |
| jdbc.password | ****** | The password corresponding to jdbc.username. |
| jdbc.ssl_mode | false | The SSL mode of connections with database. |
| jdbc.reconnect_interval | 3 | The interval(seconds) between reconnections when the database connection fails. |
| jdbc.reconnect_max_times | 3 | The reconnect times when the database connection fails. |
| jdbc.storage_engine | InnoDB | The storage engine of backend store database, like InnoDB/MyISAM/RocksDB for MySQL. |
| jdbc.postgresql.connect_database | template1 | The database used to connect to when initializing the store, dropping the store or checking whether the store exists. |
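For example, a MySQL backend sketch using the defaults above (the password is a placeholder):

backend=mysql
serializer=mysql
jdbc.driver=com.mysql.jdbc.Driver
jdbc.url=jdbc:mysql://127.0.0.1:3306
jdbc.username=root
jdbc.password=******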
PostgreSQL backend config options
| config option | default value | description |
|---|---|---|
| backend | | Must be set to postgresql. |
| serializer | | Must be set to postgresql. |

All other options are the same as for the MySQL backend.
The driver and url of the PostgreSQL backend should be set to:
jdbc.driver=org.postgresql.Driver
jdbc.url=jdbc:postgresql://localhost:5432/
4.3 - HugeGraph Built-in User Permissions and Extended Permission Configuration and Usage
Overview
To make authentication convenient in different user scenarios, HugeGraph currently has two built-in permission modes:
- The simple ConfigAuthenticator mode, which stores usernames and passwords through a local config file (supports only a single GraphServer)
- The complete StandardAuthenticator mode, which supports multi-user authentication and fine-grained permission access control, adopting a 4-layer design based on "user-group-operation-resource" to flexibly control user roles and permissions (supports multiple GraphServers)
Several core designs of the StandardAuthenticator mode:
- A super administrator (admin) user is created during initialization; the super administrator can then create other users, and a newly created user, once granted sufficient permissions, can create or manage more users
- Supports dynamically creating users, groups and resources, and dynamically granting or revoking permissions
- A user can belong to one or more groups, and each group can hold operation permissions on any number of resources; operation types include read, write, delete, execute, etc.
- A "resource" describes data in the graph database, such as vertices that meet certain criteria. Each resource consists of three elements: type, label and properties. There are 18 types in total; any label and any properties can be combined to form a resource. The conditions within one resource are ANDed together, while the conditions across multiple resources are ORed
For example:
// Scenario: a user only has read permission for data in the Beijing area
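One way to read the 4-layer design is as a chain; the sketch below is illustrative pseudo-notation only (the names and syntax are not the actual API):

// user(name=alice) -belong-> group(name=beijing_readers)
//   -access(read)-> resource(type=VERTEX, label=person, properties={city: Beijing})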
StandardAuthenticator mode
The configuration flow is as follows (restart the service for changes to take effect):
Configure the authenticator and the rest-server file path in the gremlin-server.yaml config file:
authentication: {
  authenticator: com.baidu.hugegraph.auth.StandardAuthenticator,
  authenticationHandler: com.baidu.hugegraph.auth.WsAndHttpBasicAuthHandler,
  config: {tokens: conf/rest-server.properties}
}
Configure the authenticator and its graph_store information in the rest-server.properties config file:
auth.authenticator=com.baidu.hugegraph.auth.StandardAuthenticator
auth.graph_store=hugegraph

# auth client config
# If GraphServer and AuthServer are deployed separately, the following config is also required; fill in the IP:RPC-port of the AuthServer
#auth.remote_url=127.0.0.1:8899,127.0.0.1:8898,127.0.0.1:8897

The graph_store option specifies which graph is used to store user information; if multiple graphs exist, any one of them can be chosen.
Configure the gremlin.graph information in the hugegraph{n}.properties config file:
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
For detailed permission API calls and explanations, please refer to the Authentication-API documentation.
ConfigAuthenticator mode
The ConfigAuthenticator mode supports user authentication by pre-configuring user information in the config file; the implementation verifies whether a user is legitimate based on the configured static tokens. The specific configuration flow is as follows (restart the service for changes to take effect):
Configure the authenticator and the rest-server file path in the gremlin-server.yaml config file:
authentication: {
authenticator: com.baidu.hugegraph.auth.ConfigAuthenticator,
authenticationHandler: com.baidu.hugegraph.auth.WsAndHttpBasicAuthHandler,
config: {tokens: conf/rest-server.properties}
}
Configure the authenticator and its tokens information in the rest-server.properties config file:
auth.authenticator=com.baidu.hugegraph.auth.ConfigAuthenticator
auth.admin_token=token-value-a
auth.user_tokens=[hugegraph1:token-value-1, hugegraph2:token-value-2]
Configure the gremlin.graph information in the hugegraph{n}.properties config file:
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
Custom user authentication system
If a more flexible user system is required, the authenticator can be customized and extended: implement the interface com.baidu.hugegraph.auth.HugeAuthenticator, then modify the authenticator config option in the config file to point to that implementation.
4.4 - Configure HugeGraphServer to Use the https Protocol
Overview
HugeGraphServer uses the http protocol by default. If you have security requirements for requests, it can be configured to use https.
Server configuration
Modify the conf/rest-server.properties config file and change the schema part of restserver.url to https.
# Set the protocol to https
restserver.url=https://127.0.0.1:8080
# Server keystore file path; this default value takes effect automatically when the protocol is https, and can be modified as needed
ssl.keystore_file=conf/hugegraph-server.keystore
# When executing a migration command, if the --target-url uses the https protocol, the default value hugegraph takes effect automatically and can be modified as needed
--target-trust-store-password {target-password}
A default client certificate file hugegraph.truststore is already placed in the conf directory of hugegraph-tools; its password is hugegraph.
How to generate certificate files
This part gives an example of generating certificates; if the default certificate is sufficient, or you already know how to generate one, you can skip this part.
Server side
- Generate the server private key and import it into the server keystore file; server.keystore is for the server side and holds its own private key
keytool -genkey -alias serverkey -keyalg RSA -keystore server.keystore
During the process, fill in the description information according to your needs. The description information of the default certificate is as follows:

First and last name: hugegraph
Organizational unit name: hugegraph
Organization name: hugegraph
City or locality name: BJ
State or province name: BJ
Country code: CN

- Export the server certificate based on the server private key
keytool -export -alias serverkey -keystore server.keystore -file server.crt
server.crt is the server's certificate
Client side
keytool -import -alias serverkey -file server.crt -keystore client.truststore
client.truststore is for the client side and holds the trusted certificates
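To check the https setup from the command line, one option is to export a PEM copy of the certificate and point curl at it; the /apis/version endpoint is used here only as a simple GET (a sketch, adjust host and port to your deployment):

keytool -export -rfc -alias serverkey -keystore server.keystore -file server.pem
curl --cacert server.pem https://127.0.0.1:8080/apis/version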
4.5 - HugeGraph-Computer Config
Computer Config Options
| config option | default value | description |
|---|---|---|
| algorithm.message_class | org.apache.hugegraph.computer.core.config.Null | The class of message passed when compute vertex. |
| algorithm.params_class | org.apache.hugegraph.computer.core.config.Null | The class used to transfer algorithms' parameters before the algorithm is run. |
| algorithm.result_class | org.apache.hugegraph.computer.core.config.Null | The class of vertex's value; the instance is used to store the computation result for the vertex. |
| allocator.max_vertices_per_thread | 10000 | Maximum number of vertices per thread processed in each memory allocator. |
| bsp.etcd_endpoints | http://localhost:2379 | The end points to access etcd. |
| bsp.log_interval | 30000 | The log interval(in ms) to print the log while waiting for bsp event. |
| bsp.max_super_step | 10 | The max super step of the algorithm. |
| bsp.register_timeout | 300000 | The max timeout to wait for master and workers to register. |
| bsp.wait_master_timeout | 86400000 | The max timeout(in ms) to wait for master bsp event. |
| bsp.wait_workers_timeout | 86400000 | The max timeout to wait for workers bsp event. |
| hgkv.max_data_block_size | 65536 | The max byte size of hgkv-file data block. |
| hgkv.max_file_size | 2147483648 | The max number of bytes in each hgkv-file. |
| hgkv.max_merge_files | 10 | The max number of files to merge at one time. |
| hgkv.temp_file_dir | /tmp/hgkv | This folder is used to store temporary files; temporary files will be generated during the file merging process. |
| hugegraph.name | hugegraph | The graph name to load data and write results back. |
| hugegraph.url | http://127.0.0.1:8080 | The hugegraph url to load data and write results back. |
| input.edge_direction | OUT | The direction in which edge data is loaded; when the value is BOTH, edges in both OUT and IN direction will be loaded. |
| input.edge_freq | MULTIPLE | The frequency of edges that can exist between a pair of vertices, allowed values: [SINGLE, SINGLE_PER_LABEL, MULTIPLE]. SINGLE means that only one edge can exist between a pair of vertices, identified by sourceId + targetId; SINGLE_PER_LABEL means that each edge label can have one edge between a pair of vertices, identified by sourceId + edgelabel + targetId; MULTIPLE means that multiple edges can exist between a pair of vertices, identified by sourceId + edgelabel + sortValues + targetId. |
| input.filter_class | org.apache.hugegraph.computer.core.input.filter.DefaultInputFilter | The class to create input-filter object; input-filter is used to filter vertex edges according to user needs. |
| input.loader_schema_path | | The schema path of loader input, only takes effect when input.source_type=loader is enabled. |
| input.loader_struct_path | | The struct path of loader input, only takes effect when input.source_type=loader is enabled. |
| input.max_edges_in_one_vertex | 200 | The maximum number of adjacent edges allowed to be attached to a vertex; the adjacent edges will be stored and transferred together as a batch unit. |
| input.source_type | hugegraph-server | The source type to load input data, allowed values: ['hugegraph-server', 'hugegraph-loader']; 'hugegraph-loader' means using hugegraph-loader to load data from HDFS or file; if 'hugegraph-loader' is used, please also config 'input.loader_struct_path' and 'input.loader_schema_path'. |
| input.split_fetch_timeout | 300 | The timeout in seconds to fetch input splits. |
| input.split_max_splits | 10000000 | The maximum number of input splits. |
| input.split_page_size | 500 | The page size for streamed load input split data. |
| input.split_size | 1048576 | The input split size in bytes. |
| job.id | local_0001 | The job id on Yarn cluster or K8s cluster. |
| job.partitions_count | 1 | The partitions count for computing one graph algorithm job. |
| job.partitions_thread_nums | 4 | The number of threads for partition parallel compute. |
| job.workers_count | 1 | The workers count for computing one graph algorithm job. |
| master.computation_class | org.apache.hugegraph.computer.core.master.DefaultMasterComputation | Master-computation is computation that can determine whether to continue to the next superstep. It runs at the end of each superstep on master. |
| output.batch_size | 500 | The batch size of output. |
| output.batch_threads | 1 | The threads number used to batch output. |
| output.hdfs_core_site_path | | The hdfs core site path. |
| output.hdfs_delimiter | , | The delimiter of hdfs output. |
| output.hdfs_kerberos_enable | false | Whether Kerberos authentication is enabled for HDFS. |
| output.hdfs_kerberos_keytab | | The HDFS's key tab file for kerberos authentication. |
| output.hdfs_kerberos_principal | | The HDFS's principal for kerberos authentication. |
| output.hdfs_krb5_conf | /etc/krb5.conf | Kerberos configuration file. |
| output.hdfs_merge_partitions | true | Whether to merge output files of multiple partitions. |
| output.hdfs_path_prefix | /hugegraph-computer/results | The directory of hdfs output result. |
| output.hdfs_replication | 3 | The replication number of hdfs. |
| output.hdfs_site_path | | The hdfs site path. |
| output.hdfs_url | hdfs://127.0.0.1:9000 | The hdfs url of output. |
| output.hdfs_user | hadoop | The hdfs user of output. |
| output.output_class | org.apache.hugegraph.computer.core.output.LogOutput | The class to output the computation result of each vertex. Called after iteration computation. |
| output.result_name | value | The value is assigned dynamically by #name() of the instance created by WORKER_COMPUTATION_CLASS. |
| output.result_write_type | OLAP_COMMON | The result write-type to output to hugegraph, allowed values are: [OLAP_COMMON, OLAP_SECONDARY, OLAP_RANGE]. |
| output.retry_interval | 10 | The retry interval when output fails. |
| output.retry_times | 3 | The retry times when output fails. |
| output.single_threads | 1 | The threads number used to single output. |
| output.thread_pool_shutdown_timeout | 60 | The timeout seconds of output threads pool shutdown. |
| output.with_adjacent_edges | false | Output the adjacent edges of the vertex or not. |
| output.with_edge_properties | false | Output the properties of the edge or not. |
| output.with_vertex_properties | false | Output the properties of the vertex or not. |
| sort.thread_nums | 4 | The number of threads performing internal sorting. |
| transport.client_connect_timeout | 3000 | The timeout(in ms) of client connect to server. |
| transport.client_threads | 4 | The number of transport threads for client. |
| transport.close_timeout | 10000 | The timeout(in ms) of close server or close client. |
| transport.finish_session_timeout | 0 | The timeout(in ms) to finish session, 0 means using (transport.sync_request_timeout * transport.max_pending_requests). |
| transport.heartbeat_interval | 20000 | The minimum interval(in ms) between heartbeats on client side. |
| transport.io_mode | AUTO | The network IO Mode, either 'NIO', 'EPOLL', 'AUTO'; 'AUTO' means selecting the proper mode automatically. |
| transport.max_pending_requests | 8 | The max number of client unreceived acks; it will trigger the sending unavailable if the number of unreceived acks >= max_pending_requests. |
| transport.max_syn_backlog | 511 | The capacity of SYN queue on server side, 0 means using system default value. |
| transport.max_timeout_heartbeat_count | 120 | The maximum times of timeout heartbeat on client side; if the number of timeouts waiting for heartbeat response continuously > max_heartbeat_timeouts, the channel will be closed from client side. |
| transport.min_ack_interval | 200 | The minimum interval(in ms) of server reply ack. |
| transport.min_pending_requests | 6 | The minimum number of client unreceived acks; it will trigger the sending available if the number of unreceived acks < min_pending_requests. |
| transport.network_retries | 3 | The number of retry attempts for network communication if the network is unstable. |
| transport.provider_class | org.apache.hugegraph.computer.core.network.netty.NettyTransportProvider | The transport provider, currently only supports Netty. |
| transport.receive_buffer_size | 0 | The size of socket receive-buffer in bytes, 0 means using system default value. |
| transport.recv_file_mode | true | Whether to enable receive buffer-file mode; if enabled, it will receive the buffer and write it to file from the socket by zero-copy. |
| transport.send_buffer_size | 0 | The size of socket send-buffer in bytes, 0 means using system default value. |
| transport.server_host | 127.0.0.1 | The server hostname or ip to listen on to transfer data. |
| transport.server_idle_timeout | 360000 | The max timeout(in ms) of server idle. |
| transport.server_port | 0 | The server port to listen on to transfer data. The system will assign a random port if it's set to 0. |
| transport.server_threads | 4 | The number of transport threads for server. |
| transport.sync_request_timeout | 10000 | The timeout(in ms) to wait for a response after sending a sync-request. |
| transport.tcp_keep_alive | true | Whether to enable TCP keep-alive. |
| transport.transport_epoll_lt | false | Whether to enable EPOLL level-trigger. |
| transport.write_buffer_high_mark | 67108864 | The high water mark for write buffer in bytes; it will trigger the sending unavailable if the number of queued bytes > write_buffer_high_mark. |
| transport.write_buffer_low_mark | 33554432 | The low water mark for write buffer in bytes; it will trigger the sending available if the number of queued bytes < write_buffer_low_mark. |
| transport.write_socket_timeout | 3000 | The timeout(in ms) to write data to socket buffer. |
| valuefile.max_segment_size | 1073741824 | The max number of bytes in each segment of value-file. |
| worker.combiner_class | org.apache.hugegraph.computer.core.config.Null | Combiner can combine messages into one value for a vertex; for example the page-rank algorithm can combine messages of a vertex into a sum value. |
| worker.computation_class | org.apache.hugegraph.computer.core.config.Null | The class to create worker-computation object; worker-computation is used to compute each vertex in each superstep. |
| worker.data_dirs | [jobs] | The directories separated by ',' that received vertices and messages can persist into. |
| worker.edge_properties_combiner_class | org.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombiner | The combiner can combine several properties of the same edge into one properties at inputstep. |
| worker.partitioner | org.apache.hugegraph.computer.core.graph.partition.HashPartitioner | The partitioner that decides which partition a vertex should be in, and which worker a partition should be in. |
| worker.received_buffers_bytes_limit | 104857600 | The limit bytes of buffers of received data; the total size of all buffers can't exceed this limit. If received buffers reach this limit, they will be merged into a file. |
| worker.vertex_properties_combiner_class | org.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombiner | The combiner can combine several properties of the same vertex into one properties at inputstep. |
| worker.wait_finish_messages_timeout | 86400000 | The max timeout(in ms) the message-handler waits for the finish-message of all workers. |
| worker.wait_sort_timeout | 600000 | The max timeout(in ms) the message-handler waits for the sort-thread to sort one batch of buffers. |
| worker.write_buffer_capacity | 52428800 | The initial size of the write buffer that is used to store vertex or message. |
| worker.write_buffer_threshold | 52428800 | The threshold of the write buffer; exceeding it will trigger sorting; the write buffer is used to store vertex or message. |
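For example, switching the input source to hugegraph-loader requires both loader paths from the table above to be set (the paths are illustrative):

input.source_type=hugegraph-loader
input.loader_schema_path=/path/to/schema.json
input.loader_struct_path=/path/to/struct.json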
K8s Operator Config Options
NOTE: Options need to be set via environment variables after converting the name, e.g. k8s.internal_etcd_url => INTERNAL_ETCD_URL
| config option | default value | description |
|---|---|---|
| k8s.auto_destroy_pod | true | Whether to automatically destroy all pods when the job is completed or failed. |
| k8s.close_reconciler_timeout | 120 | The max timeout(in ms) to close reconciler. |
| k8s.internal_etcd_url | http://127.0.0.1:2379 | The internal etcd url for operator system. |
| k8s.max_reconcile_retry | 3 | The max retry times of reconcile. |
| k8s.probe_backlog | 50 | The maximum backlog for serving health probes. |
| k8s.probe_port | 9892 | The port that the controller binds to for serving health probes. |
| k8s.ready_check_internal | 1000 | The time interval(ms) of check ready. |
| k8s.ready_timeout | 30000 | The max timeout(in ms) of check ready. |
| k8s.reconciler_count | 10 | The max number of reconciler threads. |
| k8s.resync_period | 600000 | The minimum frequency at which watched resources are reconciled. |
| k8s.timezone | Asia/Shanghai | The timezone of computer job and operator. |
| k8s.watch_namespace | hugegraph-computer-system | Watch custom resources in this namespace and ignore other namespaces; '*' means all namespaces will be watched. |
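Following the NOTE above, k8s.watch_namespace would be converted to the WATCH_NAMESPACE environment variable, e.g. (illustrative):

export WATCH_NAMESPACE=hugegraph-computer-system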
HugeGraph-Computer CRD
| spec | default value | description | required |
|---|---|---|---|
| algorithmName | | The name of algorithm. | true |
| jobId | | The job id. | true |
| image | | The image of algorithm. | true |
| computerConf | | The map of computer config options. | true |
| workerInstances | | The number of worker instances; it overrides the 'job.workers_count' option. | true |
| pullPolicy | Always | The pull-policy of image, for details please refer to: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy | false |
| pullSecrets | | The pull-secrets of image, for details please refer to: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | false |
| masterCpu | | The cpu limit of master; the unit can be 'm' or without unit, for details please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu | false |
| workerCpu | | The cpu limit of worker; the unit can be 'm' or without unit, for details please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu | false |
| masterMemory | | The memory limit of master; the unit can be one of Ei, Pi, Ti, Gi, Mi, Ki, for details please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory | false |
| workerMemory | | The memory limit of worker; the unit can be one of Ei, Pi, Ti, Gi, Mi, Ki, for details please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory | false |
| log4jXml | | The content of log4j.xml for computer job. | false |
| jarFile | | The jar path of computer algorithm. | false |
| remoteJarUri | | The remote jar uri of computer algorithm; it will overlay the algorithm image. | false |
| jvmOptions | | The java startup parameters of computer job. | false |
| envVars | | Please refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/ | false |
| envFrom | | Please refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/ | false |
| masterCommand | bin/start-computer.sh | The run command of master, equivalent to the 'Entrypoint' field of Docker. | false |
| masterArgs | ["-r master", "-d k8s"] | The run args of master, equivalent to the 'Cmd' field of Docker. | false |
| workerCommand | bin/start-computer.sh | The run command of worker, equivalent to the 'Entrypoint' field of Docker. | false |
| workerArgs | ["-r worker", "-d k8s"] | The run args of worker, equivalent to the 'Cmd' field of Docker. | false |
| volumes | | Please refer to: https://kubernetes.io/docs/concepts/storage/volumes/ | false |
| volumeMounts | | Please refer to: https://kubernetes.io/docs/concepts/storage/volumes/ | false |
| secretPaths | | The map of k8s-secret name and mount path. | false |
| configMapPaths | | The map of k8s-configmap name and mount path. | false |
| podTemplateSpec | | Please refer to: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-template-v1/#PodTemplateSpec | false |
| securityContext | | Please refer to: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ | false |
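A hypothetical custom resource using these spec fields might look as follows; apiVersion and kind are placeholders, so check the operator's installed CRD for the exact values:

apiVersion: hugegraph.apache.org/v1   # placeholder, verify against the installed CRD
kind: HugeGraphComputerJob            # placeholder, verify against the installed CRD
metadata:
  name: pagerank-sample
spec:
  algorithmName: pageRank
  jobId: pagerank_0001
  image: hugegraph/hugegraph-computer:latest
  workerInstances: 3
  computerConf:
    job.partitions_count: "4"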
KubeDriver Config Options
| config option | default value | description |
|---|---|---|
| k8s.build_image_bash_path | | The path of command used to build image. |
| k8s.enable_internal_algorithm | true | Whether to enable internal algorithms. |
| k8s.framework_image_url | hugegraph/hugegraph-computer:latest | The image url of computer framework. |
| k8s.image_repository_password | | The password for login image repository. |
| k8s.image_repository_registry | | The address for login image repository. |
| k8s.image_repository_url | hugegraph/hugegraph-computer | The url of image repository. |
| k8s.image_repository_username | | The username for login image repository. |
| k8s.internal_algorithm | [pageRank] | The name list of all internal algorithms. |
| k8s.internal_algorithm_image_url | hugegraph/hugegraph-computer:latest | The image url of internal algorithms. |
| k8s.jar_file_dir | /cache/jars/ | The directory to which the algorithm jar is uploaded. |
| k8s.kube_config | ~/.kube/config | The path of k8s config file. |
| k8s.log4j_xml_path | | The log4j.xml path for computer job. |
| k8s.namespace | hugegraph-computer-system | The namespace of hugegraph-computer system. |
| k8s.pull_secret_names | [] | The names of pull-secret for pulling image. |
5 - API
5.1 - HugeGraph RESTful API
Through HugeGraph-API, HugeGraph-Server provides clients with interfaces to operate on graphs based on the HTTP protocol, mainly including the create, read, update and delete of metadata and graph data, traversal algorithms, variables, graph operations and other operations.
5.1.1 - Schema API
1.1 Schema
HugeGraph provides a single interface to get all the Schema information of a graph, including: PropertyKey, VertexLabel, EdgeLabel and IndexLabel.
Method & Url
GET http://localhost:8080/graphs/{graph_name}/schema

e.g. GET http://localhost:8080/graphs/hugegraph/schema

Response Status
200
Response Body
{
"propertykeys": [
{
"id": 7,
"name": "price",
- "data_type": "INT",
+ "data_type": "DOUBLE",
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.741"
+ "~create_time": "2023-05-08 17:49:05.316"
}
},
{
@@ -1657,11 +1660,10 @@
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.729"
+ "~create_time": "2023-05-08 17:49:05.309"
}
},
{
@@ -1671,11 +1673,10 @@
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.691"
+ "~create_time": "2023-05-08 17:49:05.287"
}
},
{
@@ -1685,11 +1686,10 @@
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.678"
+ "~create_time": "2023-05-08 17:49:05.280"
}
},
{
@@ -1699,11 +1699,10 @@
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.718"
+ "~create_time": "2023-05-08 17:49:05.301"
}
},
{
@@ -1713,11 +1712,10 @@
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.707"
+ "~create_time": "2023-05-08 17:49:05.294"
}
},
{
@@ -1727,11 +1725,10 @@
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.609"
+ "~create_time": "2023-05-08 17:49:05.250"
}
}
],
@@ -1744,9 +1741,11 @@
"name"
],
"nullable_keys": [
- "age"
+ "age",
+ "city"
],
"index_labels": [
+ "personByAge",
"personByCity",
"personByAgeAndCity"
],
@@ -1759,19 +1758,15 @@
"ttl": 0,
"enable_label_index": true,
"user_data": {
- "~create_time": "2021-09-03 15:13:40.783"
+ "~create_time": "2023-05-08 17:49:05.336"
}
},
{
"id": 2,
"name": "software",
- "id_strategy": "PRIMARY_KEY",
- "primary_keys": [
- "name"
- ],
- "nullable_keys": [
- "price"
- ],
+ "id_strategy": "CUSTOMIZE_NUMBER",
+ "primary_keys": [],
+ "nullable_keys": [],
"index_labels": [
"softwareByPrice"
],
@@ -1784,7 +1779,7 @@
"ttl": 0,
"enable_label_index": true,
"user_data": {
- "~create_time": "2021-09-03 15:13:40.840"
+ "~create_time": "2023-05-08 17:49:05.347"
}
}
],
@@ -1794,13 +1789,9 @@
"name": "knows",
"source_label": "person",
"target_label": "person",
- "frequency": "MULTIPLE",
- "sort_keys": [
- "date"
- ],
- "nullable_keys": [
- "weight"
- ],
+ "frequency": "SINGLE",
+ "sort_keys": [],
+ "nullable_keys": [],
"index_labels": [
"knowsByWeight"
],
@@ -1812,7 +1803,7 @@
"ttl": 0,
"enable_label_index": true,
"user_data": {
- "~create_time": "2021-09-03 15:13:41.840"
+ "~create_time": "2023-05-08 17:49:08.437"
}
},
{
@@ -1821,11 +1812,8 @@
"source_label": "person",
"target_label": "software",
"frequency": "SINGLE",
- "sort_keys": [
- ],
- "nullable_keys": [
- "weight"
- ],
+ "sort_keys": [],
+ "nullable_keys": [],
"index_labels": [
"createdByDate",
"createdByWeight"
@@ -1838,13 +1826,27 @@
"ttl": 0,
"enable_label_index": true,
"user_data": {
- "~create_time": "2021-09-03 15:13:41.868"
+ "~create_time": "2023-05-08 17:49:08.446"
}
}
],
"indexlabels": [
{
"id": 1,
+ "name": "personByAge",
+ "base_type": "VERTEX_LABEL",
+ "base_value": "person",
+ "index_type": "RANGE_INT",
+ "fields": [
+ "age"
+ ],
+ "status": "CREATED",
+ "user_data": {
+ "~create_time": "2023-05-08 17:49:05.375"
+ }
+ },
+ {
+ "id": 2,
"name": "personByCity",
"base_type": "VERTEX_LABEL",
"base_value": "person",
@@ -1854,68 +1856,68 @@
],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.886"
+ "~create_time": "2023-05-08 17:49:06.898"
}
},
{
- "id": 4,
- "name": "createdByDate",
- "base_type": "EDGE_LABEL",
- "base_value": "created",
+ "id": 3,
+ "name": "personByAgeAndCity",
+ "base_type": "VERTEX_LABEL",
+ "base_value": "person",
"index_type": "SECONDARY",
"fields": [
- "date"
+ "age",
+ "city"
],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:41.878"
+ "~create_time": "2023-05-08 17:49:07.407"
}
},
{
- "id": 5,
- "name": "createdByWeight",
- "base_type": "EDGE_LABEL",
- "base_value": "created",
+ "id": 4,
+ "name": "softwareByPrice",
+ "base_type": "VERTEX_LABEL",
+ "base_value": "software",
"index_type": "RANGE_DOUBLE",
"fields": [
- "weight"
+ "price"
],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:42.117"
+ "~create_time": "2023-05-08 17:49:07.916"
}
},
{
- "id": 2,
- "name": "personByAgeAndCity",
- "base_type": "VERTEX_LABEL",
- "base_value": "person",
+ "id": 5,
+ "name": "createdByDate",
+ "base_type": "EDGE_LABEL",
+ "base_value": "created",
"index_type": "SECONDARY",
"fields": [
- "age",
- "city"
+ "date"
],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:41.351"
+ "~create_time": "2023-05-08 17:49:08.454"
}
},
{
- "id": 3,
- "name": "softwareByPrice",
- "base_type": "VERTEX_LABEL",
- "base_value": "software",
- "index_type": "RANGE_INT",
+ "id": 6,
+ "name": "createdByWeight",
+ "base_type": "EDGE_LABEL",
+ "base_value": "created",
+ "index_type": "RANGE_DOUBLE",
"fields": [
- "price"
+ "weight"
],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:41.587"
+ "~create_time": "2023-05-08 17:49:08.963"
}
},
{
- "id": 6,
+ "id": 7,
"name": "knowsByWeight",
"base_type": "EDGE_LABEL",
"base_value": "knows",
@@ -1925,13 +1927,13 @@
],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:42.376"
+ "~create_time": "2023-05-08 17:49:09.473"
}
}
]
}
5.1.2 - PropertyKey API
1.2 PropertyKey
Params description:
- name: the name of the property type, required
- data_type: the data type of the property type, including: bool, byte, int, long, float, double, string, date, uuid, blob; defaults to string
- cardinality: the cardinality of the property type, including: single, list, set; defaults to single
Request body field description:
- id: the id value of the property type
- properties: the properties of the property; for a property type this item is empty
- user_data: general information of the property type, e.g. the value range of the age property can be set, with a minimum of 0 and a maximum of 100; currently no validation is performed on this item, it is only a reserved entry for later extension
1.2.1 Create a PropertyKey
Method & Url
POST http://localhost:8080/graphs/hugegraph/schema/propertykeys

Request Body
{
"name": "age",
"data_type": "INT",
"cardinality": "SINGLE"
}

...
    },
    "task_id": 0
}

1.2.2 Add or remove userdata for an existing PropertyKey
Params
- action: indicates whether the current action is to add or to remove; values are append (add) and eliminate (remove)
Method & Url
PUT http://localhost:8080/graphs/hugegraph/schema/propertykeys/age?action=append

Request Body
{
"name": "age",
"user_data": {
"min": 0,
...
    },
    "task_id": 0
}

1.2.3 List all PropertyKeys
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/propertykeys

Response Status
200
Response Body
{
"propertykeys": [
{
        ...
        }
    ]
}

1.2.4 Get a PropertyKey by name
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/propertykeys/age

Where age is the name of the PropertyKey to get
Response Status
200
Response Body
{
"id": 1,
"name": "age",
    ...
        "~create_time": "2022-05-13 13:47:23.745"
    }
}

1.2.5 Delete a PropertyKey by name
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/schema/propertykeys/age

Where age is the name of the PropertyKey to delete
Response Status
202
Response Body
{
"task_id" : 0
}
5.1.3 - VertexLabel API
1.3 VertexLabel
Assuming the PropertyKeys listed in 1.1.3 have already been created.
Params description
- id: the id value of the vertex type
- name: the name of the vertex type, required
- id_strategy: the ID strategy of the vertex type: primary key ID, auto-generated, customized string, customized number, customized UUID; defaults to primary key ID
- properties: the property types associated with the vertex type
- primary_keys: the primary key properties; must have a value when the ID strategy is PRIMARY_KEY, and must be empty for other ID strategies
- enable_label_index: whether to enable the label index, disabled by default
- index_names: the indexes created on the vertex type, see 3.4 for details
- nullable_keys: nullable properties
- user_data: general information of the vertex type, same purpose as for property types
1.3.1 Create a VertexLabel
Method & Url
POST http://localhost:8080/graphs/hugegraph/schema/vertexlabels

Request Body
{
"name": "person",
"id_strategy": "DEFAULT",
"properties": [
    ...
    "ttl_start_time": "createdTime",
    "enable_label_index": true
}

1.3.2 Add properties or userdata to an existing VertexLabel, or remove userdata (removing properties is currently not supported)
Params
- action: indicates whether the current action is to add or to remove; values are append (add) and eliminate (remove)
Method & Url
PUT http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person?action=append

Request Body
{
{
"name": "person",
"properties": [
"city"
        ...
        "super": "animal"
    }
}

1.3.3 List all VertexLabels
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/vertexlabels

Response Status
200
Response Body
{
"vertexlabels": [
{
        ...
        }
    ]
}

1.3.4 Get a VertexLabel by name
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person

Response Status
200
Response Body
{
"id": 1,
"primary_keys": [
        ...
        "super": "animal"
    }
}

1.3.5 Delete a VertexLabel by name
Deleting a VertexLabel will delete the corresponding vertices and related index data, and will generate an asynchronous task
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person

Response Status
202
Response Body
{
"task_id": 1
}
Note:
You can query the execution status of the asynchronous task via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the asynchronous task RESTful API for more.
5.1.4 - EdgeLabel API
1.4 EdgeLabel
Assuming the PropertyKeys in 1.2.3 and the VertexLabels in 1.3.3 have already been created.
Params description
- name: the name of the edge type, required
- source_label: the name of the source vertex type, required
- target_label: the name of the target vertex type, required
- frequency: whether multiple edges can exist between two vertices; values are SINGLE and MULTIPLE; optional, defaults to SINGLE
- properties: the property types associated with the edge type, optional
- sort_keys: the list of distinguishing key properties when multiple associations are allowed
- nullable_keys: nullable properties, optional, nullable by default
- enable_label_index: whether to enable the label index, disabled by default
1.4.1 Create an EdgeLabel
Method & Url
POST http://localhost:8080/graphs/hugegraph/schema/edgelabels

Request Body
{
"name": "created",
"source_label": "person",
"target_label": "software",
    ...
    "ttl_start_time": "createdTime",
    "user_data": {}
}

1.4.2 Add properties or userdata to an existing EdgeLabel, or remove userdata (removing properties is currently not supported)
Params
- action: indicates whether the current action is to add or to remove; values are append (add) and eliminate (remove)
Method & Url
PUT http://localhost:8080/graphs/hugegraph/schema/edgelabels/created?action=append

Request Body
{
"name": "created",
"properties": [
"weight"
    ...
    "enable_label_index": true,
    "user_data": {}
}

1.4.3 List all EdgeLabels
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/edgelabels

Response Status
200
Response Body
{
"edgelabels": [
{
        ...
        }
    ]
}

1.4.4 Get an EdgeLabel by name
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/edgelabels/created

Response Status
200
Response Body
{
"id": 1,
"sort_keys": [
    ...
    "enable_label_index": true,
    "user_data": {}
}

1.4.5 Delete an EdgeLabel by name
Deleting an EdgeLabel will delete the corresponding edges and related index data, and will generate an asynchronous task
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/schema/edgelabels/created

Response Status
202
Response Body
{
"task_id": 1
}
Note:
You can query the execution status of the asynchronous task via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the asynchronous task RESTful API for more.
5.1.5 - IndexLabel API
1.5 IndexLabel
Assuming the PropertyKeys in 1.1.3, the VertexLabels in 1.2.3 and the EdgeLabels in 1.3.3 have already been created.
1.5.1 Create an IndexLabel
Method & Url
POST http://localhost:8080/graphs/hugegraph/schema/indexlabels

Request Body
{
"name": "personByCity",
"base_type": "VERTEX_LABEL",
"base_value": "person",
    ...
    },
    "task_id": 2
}

1.5.2 List all IndexLabels
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/indexlabels

Response Status
200
Response Body
{
"indexlabels": [
{
        ...
        }
    ]
}

1.5.3 Get an IndexLabel by name
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/indexlabels/personByCity

Response Status
200
Response Body
{
"id": 1,
"base_type": "VERTEX_LABEL",
    ...
    ],
    "index_type": "SECONDARY"
}

1.5.4 Delete an IndexLabel by name
Deleting an IndexLabel will delete the related index data, and will generate an asynchronous task
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/schema/indexlabels/personByCity

Response Status
202
Response Body
{
"task_id": 1
}
Note:
You can query the execution status of the asynchronous task via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the asynchronous task RESTful API for more.
5.1.6 - Rebuild API
1.6 Rebuild
1.6.1 Rebuild an IndexLabel
Method & Url
PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/indexlabels/personByCity

Response Status
202
Response Body
{
"task_id": 1
}
Note:
You can query the execution status of the asynchronous task via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the asynchronous task RESTful API for more.
1.6.2 Rebuild all indexes of a VertexLabel
Method & Url
PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/vertexlabels/person

Response Status
202
Response Body
{
"task_id": 2
}
Note:
You can query the execution status of the asynchronous task via GET http://localhost:8080/graphs/hugegraph/tasks/2 (where "2" is the task_id); see the asynchronous task RESTful API for more.
1.6.3 Rebuild all indexes of an EdgeLabel
Method & Url
PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/edgelabels/created

Response Status
202
Response Body
{
"task_id": 3
}
Note:
You can query the execution status of the asynchronous task via GET http://localhost:8080/graphs/hugegraph/tasks/3 (where "3" is the task_id); see the asynchronous task RESTful API for more.
5.1.7 - Vertex API
2.1 Vertex
The Id strategy in the vertex type determines the Id type of the vertex; the correspondence is as follows:

| Id_Strategy | id type |
|---|---|
| AUTOMATIC | number |
| PRIMARY_KEY | string |
| CUSTOMIZE_STRING | string |
| CUSTOMIZE_NUMBER | number |
| CUSTOMIZE_UUID | uuid |

The id part of the url in the vertex GET/PUT/DELETE API should be an id value with type information; the type information is indicated by whether the id in the JSON string is quoted, i.e.:
- when the id type is number, the id in the url is unquoted, e.g. xxx/vertices/123456
- when the id type is string, the id in the url is quoted, e.g. xxx/vertices/"123456"
The following examples all assume that the various schema information described above has already been created.
2.1.1 Create a vertex
Method & Url
POST http://localhost:8080/graphs/hugegraph/graph/vertices

Request Body
{
"label": "person",
"properties": {
"name": "marko",
        ...
        ]
    }
}

2.1.2 Create multiple vertices
Method & Url
POST http://localhost:8080/graphs/hugegraph/graph/vertices/batch

Request Body
[
{
"label": "person",
"properties": {
...
    "1:marko",
    "2:ripple"
]

2.1.3 Update vertex properties
Method & Url
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/"1:marko"?action=append

Request Body
{
"label": "person",
"properties": {
"age": 30,
        ...
        }
    ]
}

2.1.4 Batch update vertex properties
Method & Url
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/batch

Request Body
{
"vertices":[
{
"label":"software",
@@ -2698,8 +2700,8 @@
}
]
}
Result analysis:
- The lang property specifies no update strategy, so the new value overwrites the old one directly, whether or not the new value is null.
- The price property uses the BIGGER strategy; the old value was 328 and the new value 299, so the old value 328 is kept.
- The age property uses the OVERRIDE strategy, but the new payload does not include age, which is equivalent to age being null, so the old value 32 is kept.
- The city property also uses the OVERRIDE strategy, and the new value is not null, so it overwrites the old value.
- The weight property uses the SUM strategy; the old value was 0.1 and the new value 0.2, so the final value is 0.3.
- The hobby property (cardinality Set) uses the UNION strategy, so the new value is unioned with the old one.
The other update strategies work analogously and are not repeated here; a sketch of such a batch request follows.
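A minimal sketch of a batch update with per-property strategies, assuming a local HugeGraphServer and the "hugegraph" graph from these docs; the exact body fields (update_strategies, create_if_not_exist) follow the request shown above and are illustrative:

import requests

body = {
    "vertices": [
        {"label": "software", "type": "vertex",
         "properties": {"name": "lop", "lang": "java",
                        "price": 299, "weight": 0.2}}
    ],
    # per-property merge strategies applied on conflict
    "update_strategies": {"price": "BIGGER", "weight": "SUM"},
    "create_if_not_exist": True
}
resp = requests.put(
    "http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/batch",
    json=body)
print(resp.status_code, resp.json())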
2.1.5 Delete vertex properties
Method & Url
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/"1:marko"?action=eliminate
Request Body
{
"label": "person",
"properties": {
"city": "Beijing"
]
}
}
2.1.6 Query vertices by conditions
Params
- label: vertex label
- properties: property key-value pairs (querying by property requires that an index has been created in advance)
- limit: maximum number of results
- page: page number
All of the above are optional. If page is provided, limit must also be provided and no other parameters are allowed. label, properties and limit can be combined freely.
Property key-value pairs are given as JSON of property names and values, and multiple pairs may be used as query conditions. Property values support exact match and range match: an exact match looks like properties={"age":29}, a range match looks like properties={"age":"P.gt(29)"}. The expressions supported for range match are:

Expression                          Description
P.eq(number)                        vertices whose property value equals number
P.neq(number)                       vertices whose property value does not equal number
P.lt(number)                        vertices whose property value is less than number
P.lte(number)                       vertices whose property value is less than or equal to number
P.gt(number)                        vertices whose property value is greater than number
P.gte(number)                       vertices whose property value is greater than or equal to number
P.between(number1,number2)          vertices whose property value is >= number1 and < number2
P.inside(number1,number2)           vertices whose property value is > number1 and < number2
P.outside(number1,number2)          vertices whose property value is < number1 or > number2
P.within(value1,value2,value3,…)    vertices whose property value equals any of the given values

Query vertices whose label is person and age is 29, limited to 1 result
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/vertices?label=person&properties={"age":29}&limit=1
Response Status
200
Response Body
{
"vertices": [
{
}
]
}
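A minimal sketch of the same kind of conditional query from code, assuming a local HugeGraphServer and that a range index on age exists; note that the properties parameter is itself a JSON string, so it is built with json.dumps and sent as an ordinary query parameter (here a range match with P.gt):

import json
import requests

resp = requests.get(
    "http://localhost:8080/graphs/hugegraph/graph/vertices",
    params={"label": "person",
            "properties": json.dumps({"age": "P.gt(20)"}),
            "limit": 10})
print([v["id"] for v in resp.json()["vertices"]])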
Query all vertices with paging: get the first page (page without a value), limited to 3 results
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/vertices?page&limit=3
Response Status
200
Response Body
{
"vertices": [{
"id": "2:ripple",
"page": "001000100853313a706574657200f07ffffffc00e797c6349be736fffc8699e8a502efe10004"
}
The returned body carries the page number of the next page, "page": "001000100853313a706574657200f07ffffffc00e797c6349be736fffc8699e8a502efe10004"; pass this value as the page parameter when querying the next page.
Query all vertices with paging: get the next page (pass the page value returned by the previous page), limited to 3 results
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/vertices?page=001000100853313a706574657200f07ffffffc00e797c6349be736fffc8699e8a502efe10004&limit=3
Response Status
200
Response Body
{
"vertices": [{
"id": "1:josh",
],
"page": null
}
此时"page": null
表示已经没有下一页了 (注: 后端为 Cassandra 时,为了性能考虑,返回页恰好为最后一页时,返回 page
值可能非空,通过该 page
再请求下一页数据时则返回 空数据
及 page = null
,其他情况类似)
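A minimal paging sketch, assuming a local HugeGraphServer: it follows the returned page token until it is null, and also treats an empty vertex list as a terminator to cover the Cassandra case above:

import requests

url = "http://localhost:8080/graphs/hugegraph/graph/vertices"
page = ""   # an empty page value requests the first page
while page is not None:
    resp = requests.get(url, params={"page": page, "limit": 3})
    data = resp.json()
    vertices = data["vertices"]
    if not vertices:
        break               # Cassandra may return a token for an empty last page
    for v in vertices:
        print(v["id"])
    page = data["page"]     # None (JSON null) when there is no next page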
2.1.7 Get a vertex by Id
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"
Response Status
200
Response Body
{
"id": "1:marko",
"label": "person",
]
}
}
2.1.8 Delete a vertex by Id
Params
- label: vertex label, optional
Delete a vertex by Id only
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"
Response Status
204
Delete a vertex by Label + Id
Deleting a vertex with both the Label parameter and the Id generally performs better than deleting by Id alone.
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"?label=person
Response Status
204
5.1.8 - Edge API
2.2 Edge
The change to the vertex id format also affects the edge Id and the format of the source and target vertex ids.
An EdgeId is the concatenation of src-vertex-id + direction + label + sort-values + tgt-vertex-id,
but here the vertex id type is distinguished not by quotes but by a prefix:
- when the id type is number, the vertex id in the EdgeId carries the prefix L, e.g. "L123456>1>>L987654"
- when the id type is string, the vertex id in the EdgeId carries the prefix S, e.g. "S1:peter>1>>S2:lop"
The following examples assume that the schema and vertices described earlier have already been created; a sketch of building and URL-encoding such an EdgeId follows.
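Because an EdgeId contains ">" characters, it is safest to percent-encode it when placing it in a URL path (the documented URLs show the raw form, which many clients also accept). A minimal sketch, assuming a local HugeGraphServer and the "created" edge from these docs:

import requests
from urllib.parse import quote

# EdgeId = src-vertex-id + direction + label + sort-values + tgt-vertex-id,
# with S/L prefixes marking string/number vertex ids
edge_id = "S1:peter>1>>S2:lop"
url = ("http://localhost:8080/graphs/hugegraph/graph/edges/"
       + quote(edge_id, safe=""))
resp = requests.get(url)
print(resp.status_code, resp.json())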
2.2.1 Create an edge
Params
- label: edge label name, required
- outV: source vertex id, required
- inV: target vertex id, required
- outVLabel: source vertex label, required
- inVLabel: target vertex label, required
- properties: properties attached to the edge; the inner structure is:
  - name: property name
  - value: property value
Method & Url
POST http://localhost:8080/graphs/hugegraph/graph/edges
Request Body
{
"label": "created",
"outV": "1:peter",
"inV": "2:lop",
"weight": 0.2
}
}
2.2.2 Create multiple edges
Params
- check_vertex: whether to check that the vertices exist (true | false); when set to true, an error is reported if the source or target vertex of an edge to insert does not exist.
Method & Url
POST http://localhost:8080/graphs/hugegraph/graph/edges/batch
Request Body
[
{
"label": "created",
"outV": "1:peter",
"S1:peter>1>>S2:lop",
"S1:marko>2>>S1:vadas"
]
2.2.3 Update edge properties
Method & Url
PUT http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?action=append
Request Body
{
"properties": {
"weight": 1.0
}
}
]
}
2.2.4 Update edge properties in batch
Method & Url
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/edges/batch
Request Body
{
"edges":[
{
"id":"S1:josh>2>>S2:ripple",
}
]
}
2.2.5 Delete edge properties
Method & Url
PUT http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?action=eliminate
Request Body
{
"properties": {
"weight": 1.0
}
}
}
2.2.6 Query edges by conditions
Params
- vertex_id: vertex id
- direction: edge direction (OUT | IN | BOTH)
- label: edge label
- properties: property key-value pairs (querying by property requires that an index has been created in advance)
- offset: offset, defaults to 0
- limit: number of results, defaults to 100
- page: page number
The supported queries are:
- when vertex_id is provided, page cannot be used; direction, label and properties are optional, and offset and limit can bound the result range
- when vertex_id is not provided, label and properties are optional
- when page is used: offset cannot be used (leave it out or set it to 0), direction cannot be used, and properties may contain at most one pair
- when page is not used: offset and limit can bound the result range, and direction is ignored
Property key-value pairs are given as JSON of property names and values, and multiple pairs may be used as query conditions. Property values support exact match and range match: an exact match looks like properties={"weight":0.8}, a range match looks like properties={"weight":"P.gt(0.8)"}. The expressions supported for range match are:

Expression                          Description
P.eq(number)                        edges whose property value equals number
P.neq(number)                       edges whose property value does not equal number
P.lt(number)                        edges whose property value is less than number
P.lte(number)                       edges whose property value is less than or equal to number
P.gt(number)                        edges whose property value is greater than number
P.gte(number)                       edges whose property value is greater than or equal to number
P.between(number1,number2)          edges whose property value is >= number1 and < number2
P.inside(number1,number2)           edges whose property value is > number1 and < number2
P.outside(number1,number2)          edges whose property value is < number1 or > number2
P.within(value1,value2,value3,…)    edges whose property value equals any of the given values

Query edges connected to vertex person:josh (vertex_id="1:josh") with label created
Method & Url
GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?vertex_id="1:josh"&direction=BOTH&label=created&properties={}
Response Status
200
Response Body
{
"edges": [
{
}
]
}
Query all edges with paging: get the first page (page without a value), limited to 3 results
Method & Url
GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?page&limit=3
Response Status
200
Response Body
{
"edges": [{
"id": "S1:peter>2>>S2:lop",
"page": "002500100753313a6a6f73681210010004000000020953323a726970706c65f07ffffffcf07ffffffd8460d63f4b398dd2721ed4fdb7716b420004"
}
The returned body carries the page number of the next page, "page": "002500100753313a6a6f73681210010004000000020953323a726970706c65f07ffffffcf07ffffffd8460d63f4b398dd2721ed4fdb7716b420004"; pass this value as the page parameter when querying the next page.
Query all edges with paging: get the next page (pass the page value returned by the previous page), limited to 3 results
Method & Url
GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?page=002500100753313a6a6f73681210010004000000020953323a726970706c65f07ffffffcf07ffffffd8460d63f4b398dd2721ed4fdb7716b420004&limit=3
Response Status
200
Response Body
{
"edges": [{
"id": "S1:marko>1>20130220>S1:josh",
],
"page": null
}
此时"page": null
表示已经没有下一页了 (注: 后端为 Cassandra 时,为了性能考虑,返回页恰好为最后一页时,返回 page
值可能非空,通过该 page
再请求下一页数据时则返回 空数据
及 page = null
,其他情况类似)
2.2.7 根据Id获取边
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop
+
Response Status
200
Response Body
{
"id": "S1:peter>1>>S2:lop",
"label": "created",
"weight": 0.2
}
}
2.2.8 Delete an edge by Id
Params
- label: edge label, optional
Delete an edge by Id only
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop
Response Status
204
Delete an edge by Label + Id
Deleting an edge with both the Label parameter and the Id generally performs better than deleting by Id alone.
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?label=created
Response Status
204
5.1.9 - Traverser API
3.1 Overview of the traverser API
HugeGraphServer provides RESTful APIs for the HugeGraph graph database. Besides the basic CRUD operations on vertices and edges, it also offers a set of traversal methods, which we call the traverser API. These traversals implement a number of complex graph algorithms, making it convenient for users to analyze and mine graphs.
The Traverser APIs supported by HugeGraph include:
- K-out API: from a start vertex, find the neighbors reachable in exactly N steps, in a basic and an advanced form:
  - the basic form uses GET and finds the neighbors reachable in exactly N steps
  - the advanced form uses POST and finds the neighbors reachable in exactly N steps, differing from the basic form in that it:
    - supports counting the neighbors only
    - supports filtering by edge properties
    - supports returning the shortest path to each neighbor
- K-neighbor API: from a start vertex, find all neighbors reachable within N steps, in a basic and an advanced form:
  - the basic form uses GET and finds all neighbors reachable within N steps
  - the advanced form uses POST and finds all neighbors reachable within N steps, differing from the basic form in that it:
    - supports counting the neighbors only
    - supports filtering by edge properties
    - supports returning the shortest path to each neighbor
- Same Neighbors: query the common neighbors of two vertices
- Jaccard Similarity API: compute the jaccard similarity, in two forms:
  - one uses GET and computes the similarity (intersection over union) of the neighbors of two vertices
  - one uses POST and finds the N vertices in the whole graph with the highest jaccard similarity to a start vertex
- Shortest Path API: find a shortest path between two vertices
- All Shortest Paths: find all shortest paths between two vertices
- Weighted Shortest Path: find the weighted shortest path from a start vertex to a target vertex
- Single Source Shortest Path: find the weighted shortest paths from one vertex to every other vertex
- Multi Node Shortest Path: find the pairwise shortest paths among a given set of vertices
- Paths API: find all paths between two vertices, in a basic and an advanced form:
  - the basic form uses GET and finds all paths between a start and an end vertex
  - the advanced form uses POST and finds all qualifying paths between a set of start vertices and a set of end vertices
- Customized Paths API: starting from a batch of vertices, traverse all paths that follow a given pattern
- Template Path API: specify start and end vertices plus the path information between them, and find matching paths
- Crosspoints API: find the intersection points of two vertices (common ancestors or common descendants)
- Customized Crosspoints API: starting from a batch of vertices, traverse along multiple patterns and find the intersection of the vertices reached in the last step
- Rings API: find ring (cyclic) paths reachable from a start vertex
- Rays API: find paths from a start vertex out to boundary vertices (i.e. acyclic paths)
- Fusiform Similarity API: find the fusiform similar vertices of a vertex
- Vertices API
  - query vertices in batch by ID;
  - get the shards of vertices;
  - query vertices by shard;
- Edges API
  - query edges in batch by ID;
  - get the shards of edges;
  - query edges by shard;
3.2 The traverser API in detail
The examples in the usage sections below are all based on the graph given on the TinkerPop website:
The data import program is as follows:
public class Loader {
    public static void main(String[] args) {
        HugeClient client = new HugeClient("http://127.0.0.1:8080", "hugegraph");
peter.addEdge("created", lop, "date", "20170324", "weight", 0.2);
}
}
The vertex IDs are:
"2:ripple",
"1:vadas",
"1:peter",
"1:josh",
"1:marko",
"2:lop"

The edge IDs are:
"S1:peter>2>>S2:lop",
"S1:josh>2>>S2:lop",
"S1:josh>2>>S2:ripple",
"S1:marko>1>20130220>S1:josh",
"S1:marko>1>20160110>S1:vadas",
"S1:marko>2>>S2:lop"

3.2.1 K-out API (GET, basic)
3.2.1.1 Description
Given a start vertex, a direction, an edge label (optional) and a depth, find the vertices reachable from the start vertex in exactly depth steps
Params
- source: start vertex id, required
- direction: the direction in which edges fan out from the start vertex (OUT, IN, BOTH), optional, defaults to BOTH
- max_depth: number of steps, required
- label: edge label, optional, defaults to all edge labels
- nearest: when nearest is true, the shortest path from the start vertex to a result vertex has length depth and no shorter path exists; when nearest is false, there is a path of length depth from the start vertex to the result vertex (not necessarily the shortest, and cycles are allowed); optional, defaults to true
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of vertices returned, optional, defaults to 10000000
3.2.1.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/kout?source="1:marko"&max_depth=2
Response Status
200
Response Body
{
"vertices":[
"2:ripple",
"1:peter"
]
}
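A minimal sketch of calling this endpoint, assuming a local HugeGraphServer and the sample graph above; requests takes care of percent-encoding the quoted id:

import requests

resp = requests.get(
    "http://localhost:8080/graphs/hugegraph/traversers/kout",
    params={"source": '"1:marko"', "max_depth": 2})
print(resp.json())   # e.g. {"vertices": ["2:ripple", "1:peter"]}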
3.2.1.3 Typical scenarios
Find the vertices reachable in exactly N steps. Two examples:
- In a family graph, find all grandchildren of a person: the set of vertices person A reaches through two consecutive "son" edges.
- In a social graph, discover potential friends, e.g. users two friend-hops away from the target user: the vertices reachable through two consecutive "friend" edges.
3.2.2 K-out API (POST, advanced)
3.2.2.1 Description
Given a start vertex, a step (including direction, edge labels and property filters) and a depth, find the vertices reachable from the start vertex in exactly depth steps.
It differs from the basic K-out in that it:
- supports counting the neighbors only
- supports filtering by edge properties
- supports returning the shortest path to each neighbor
Params
- source: start vertex id, required
- step: the Step taken from the start vertex, required, structured as follows:
  - direction: edge direction (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge labels
  - properties: filter edges by property values
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before 0.12 the step only accepted degree as the parameter name; since 0.12 the unified name is max_degree, with degree still accepted for backward compatibility)
  - skip_degree: the minimum edge count at which a super vertex is discarded during the query, i.e. when a vertex has more adjacent edges than skip_degree, the vertex is skipped entirely. Optional; if enabled, it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled, no vertices skipped). (Note: with this enabled, the traversal tries to visit up to skip_degree edges of a vertex rather than just max_degree edges, which adds traversal overhead and may significantly affect query performance; make sure you understand the implications before enabling.)
- max_depth: number of steps, required
- nearest: when nearest is true, the shortest path from the start vertex to a result vertex has length depth and no shorter path exists; when nearest is false, there is a path of length depth from the start vertex to the result vertex (not necessarily the shortest, and cycles are allowed); optional, defaults to true
- count_only: Boolean; true means only count the results without returning them, false means return the concrete results; defaults to false
- with_path: true means return the shortest path from the start vertex to each neighbor, false means do not; optional, defaults to false
- with_vertex: optional, defaults to false:
  - true means the results include full vertex information (all vertices on the paths)
    - when with_path is true, full information of all vertices on the paths is returned
    - when with_path is false, full information of all neighbors is returned
  - false means only vertex ids are returned
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of vertices returned, optional, defaults to 10000000
3.2.2.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/kout
Request Body
{
"source": "1:marko",
"step": {
"direction": "BOTH",
}
]
}
3.2.2.3 Typical scenarios
See 3.2.1.3
3.2.3 K-neighbor (GET, basic)
3.2.3.1 Description
Given a start vertex, a direction, an edge label (optional) and a depth, find all vertices reachable within depth steps, including the start vertex itself
This is equivalent to the union of: the start vertex, K-out(1), K-out(2), …, K-out(max_depth)
Params
- source: start vertex id, required
- direction: the direction in which edges fan out from the start vertex (OUT, IN, BOTH), optional, defaults to BOTH
- max_depth: number of steps, required
- label: edge label, optional, defaults to all edge labels
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- limit: maximum number of vertices returned, which is also the maximum number of vertices visited during the traversal, optional, defaults to 10000000
3.2.3.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/kneighbor?source="1:marko"&max_depth=2
Response Status
200
Response Body
{
"vertices":[
"2:ripple",
"2:lop"
]
}
3.2.3.3 Typical scenarios
Find all vertices reachable within N steps, for example:
- In a family graph, find all descendants of a person within five generations: the set of vertices person A reaches through up to 5 consecutive "parent-child" edges.
- In a social graph, discover a friend circle: the users the target user can reach through 1, 2 or 3 "friend" edges together form the target user's friend circle
3.2.4 K-neighbor API (POST, advanced)
3.2.4.1 Description
Given a start vertex, a step (including direction, edge labels and property filters) and a depth, find all vertices reachable from the start vertex within depth steps.
It differs from the basic K-neighbor in that it:
- supports counting the neighbors only
- supports filtering by edge properties
- supports returning the shortest path to each neighbor
Params
- source: start vertex id, required
- step: the Step taken from the start vertex, required, structured as follows:
  - direction: edge direction (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge labels
  - properties: filter edges by property values
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before 0.12 the step only accepted degree as the parameter name; since 0.12 the unified name is max_degree, with degree still accepted for backward compatibility)
  - skip_degree: the minimum edge count at which a super vertex is discarded during the query, i.e. when a vertex has more adjacent edges than skip_degree, the vertex is skipped entirely. Optional; if enabled, it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled, no vertices skipped). (Note: with this enabled, the traversal tries to visit up to skip_degree edges of a vertex rather than just max_degree edges, which adds traversal overhead and may significantly affect query performance; make sure you understand the implications before enabling.)
- max_depth: number of steps, required
- count_only: Boolean; true means only count the results without returning them, false means return the concrete results; defaults to false
- with_path: true means return the shortest path from the start vertex to each neighbor, false means do not; optional, defaults to false
- with_vertex: optional, defaults to false:
  - true means the results include full vertex information (all vertices on the paths)
    - when with_path is true, full information of all vertices on the paths is returned
    - when with_path is false, full information of all neighbors is returned
  - false means only vertex ids are returned
- limit: maximum number of vertices returned, optional, defaults to 10000000
3.2.4.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/kneighbor
Request Body
{
"source": "1:marko",
"step": {
"direction": "BOTH",
}
]
}
3.2.4.3 Typical scenarios
See 3.2.3.3
3.2.5 Same Neighbors
3.2.5.1 Description
Query the common neighbors of two vertices
Params
- vertex: one vertex id, required
- other: the other vertex id, required
- direction: the direction in which edges fan out from the vertices (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge label, optional, defaults to all edge labels
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- limit: maximum number of common neighbors returned, optional, defaults to 10000000
3.2.5.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/sameneighbors?vertex="1:marko"&other="1:josh"
Response Status
200
Response Body
{
"same_neighbors":[
"2:lop"
]
}
3.2.5.3 Typical scenarios
Find the common neighbors of two vertices:
- In a social graph, discover the common followers or common followees of two users
3.2.6 Jaccard Similarity (GET)
3.2.6.1 Description
Compute the jaccard similarity of two vertices (the intersection of the two vertices' neighbor sets over the union of their neighbor sets)
Params
- vertex: one vertex id, required
- other: the other vertex id, required
- direction: the direction in which edges fan out from the vertices (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge label, optional, defaults to all edge labels
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
3.2.6.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/jaccardsimilarity?vertex="1:marko"&other="1:josh"
Response Status
200
Response Body
{
"jaccard_similarity": 0.2
}
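To make the definition concrete, here is a minimal local sketch of the same computation on the sample graph, assuming BOTH direction and all edge labels (the neighbor sets below are read off the sample data and are purely illustrative):

# jaccard(a, b) = |N(a) ∩ N(b)| / |N(a) ∪ N(b)|
marko = {"1:vadas", "1:josh", "2:lop"}    # neighbors of 1:marko
josh = {"1:marko", "2:lop", "2:ripple"}   # neighbors of 1:josh
jaccard = len(marko & josh) / len(marko | josh)
print(jaccard)   # 1 common neighbor (2:lop) out of 5 distinct -> 0.2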
3.2.6.3 Typical scenarios
Used to assess the similarity or closeness of two vertices
3.2.7 Jaccard Similarity (POST)
3.2.7.1 Description
Find the N vertices with the highest jaccard similarity to a given vertex
The jaccard similarity is computed as: the intersection of the two vertices' neighbor sets over the union of their neighbor sets
Params
- vertex: one vertex id, required
- step: the Step taken from the start vertex, required, structured as follows:
  - direction: edge direction (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge labels
  - properties: filter edges by property values
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before 0.12 the step only accepted degree as the parameter name; since 0.12 the unified name is max_degree, with degree still accepted for backward compatibility)
  - skip_degree: the minimum edge count at which a super vertex is discarded during the query, i.e. when a vertex has more adjacent edges than skip_degree, the vertex is skipped entirely. Optional; if enabled, it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled, no vertices skipped). (Note: with this enabled, the traversal tries to visit up to skip_degree edges of a vertex rather than just max_degree edges, which adds traversal overhead and may significantly affect query performance; make sure you understand the implications before enabling.)
- top: return the top N vertices with the highest jaccard similarity to the start vertex, optional, defaults to 100
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
3.2.7.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/jaccardsimilarity
Request Body
{
"vertex": "1:marko",
"step": {
"direction": "BOTH",
"1:peter": 0.3333333333333333,
"1:josh": 0.2
}
3.2.7.3 Typical scenarios
Used to find the vertices in the graph most similar to a given vertex
3.2.8 Shortest Path
3.2.8.1 Description
Given a start vertex, a target vertex, a direction, an edge label (optional) and a maximum depth, find one shortest path
Params
- source: start vertex id, required
- target: target vertex id, required
- direction: the direction in which edges fan out from the start vertex (OUT, IN, BOTH), optional, defaults to BOTH
- max_depth: maximum number of steps, required
- label: edge label, optional, defaults to all edge labels
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- skip_degree: the minimum edge count at which a super vertex is discarded during the query, i.e. when a vertex has more adjacent edges than skip_degree, the vertex is skipped entirely. Optional; if enabled, it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled, no vertices skipped). (Note: with this enabled, the traversal tries to visit up to skip_degree edges of a vertex rather than just max_degree edges, which adds traversal overhead and may significantly affect query performance; make sure you understand the implications before enabling.)
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
3.2.8.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/shortestpath?source="1:marko"&target="2:ripple"&max_depth=3
Response Status
200
Response Body
{
"path":[
"1:marko",
"2:ripple"
]
}
3.2.8.3 Typical scenarios
Find a shortest path between two vertices, for example:
- In a social graph, find the shortest relationship path between two users, i.e. the closest chain of friendship
- In a device network, find the shortest association between two devices
3.2.9 All Shortest Paths
3.2.9.1 Description
Given a start vertex, a target vertex, a direction, an edge label (optional) and a maximum depth, find all shortest paths between the two vertices
Params
- source: start vertex id, required
- target: target vertex id, required
- direction: the direction in which edges fan out from the start vertex (OUT, IN, BOTH), optional, defaults to BOTH
- max_depth: maximum number of steps, required
- label: edge label, optional, defaults to all edge labels
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- skip_degree: the minimum edge count at which a super vertex is discarded during the query, i.e. when a vertex has more adjacent edges than skip_degree, the vertex is skipped entirely. Optional; if enabled, it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled, no vertices skipped). (Note: with this enabled, the traversal tries to visit up to skip_degree edges of a vertex rather than just max_degree edges, which adds traversal overhead and may significantly affect query performance; make sure you understand the implications before enabling.)
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
3.2.9.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/allshortestpaths?source="A"&target="Z"&max_depth=10
Response Status
200
Response Body
{
"paths":[
{
}
]
}
3.2.9.3 Typical scenarios
Find all shortest paths between two vertices, for example:
- In a social graph, find all shortest relationship paths between two users, i.e. the closest chains of friendship
- In a device network, find all shortest associations between two devices
3.2.10 Weighted Shortest Path
3.2.10.1 Description
Given a start vertex, a target vertex, a direction, an edge label (optional) and a maximum depth, find one weighted shortest path
Params
- source: start vertex id, required
- target: target vertex id, required
- direction: the direction in which edges fan out from the start vertex (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge label, optional, defaults to all edge labels
- weight: the edge property used as the weight, required; must be a numeric property
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- skip_degree: the minimum edge count at which a super vertex is discarded during the query, i.e. when a vertex has more adjacent edges than skip_degree, the vertex is skipped entirely. Optional; if enabled, it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled, no vertices skipped). (Note: with this enabled, the traversal tries to visit up to skip_degree edges of a vertex rather than just max_degree edges, which adds traversal overhead and may significantly affect query performance; make sure you understand the implications before enabling.)
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- with_vertex: true means the results include full vertex information (all vertices on the path), false means only vertex ids are returned; optional, defaults to false
3.2.10.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/weightedshortestpath?source="1:marko"&target="2:ripple"&weight="weight"&with_vertex=true
Response Status
200
Response Body
{
"path": {
"weight": 2.0,
}
]
}
3.2.10.3 Typical scenarios
Find the weighted shortest path between two vertices, for example:
- In a transport network, find the cheapest way to travel from city A to city B
3.2.11 Single Source Shortest Path
3.2.11.1 Description
Starting from one vertex, find the shortest path from that vertex to every other vertex in the graph (optionally weighted)
Params
- source: start vertex id, required
- direction: the direction in which edges fan out from the start vertex (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge label, optional, defaults to all edge labels
- weight: the edge property used as the weight, optional; must be a numeric property. If absent, or present but missing on an edge, the edge weight is 1.0
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- skip_degree: the minimum edge count at which a super vertex is discarded during the query, i.e. when a vertex has more adjacent edges than skip_degree, the vertex is skipped entirely. Optional; if enabled, it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled, no vertices skipped). (Note: with this enabled, the traversal tries to visit up to skip_degree edges of a vertex rather than just max_degree edges, which adds traversal overhead and may significantly affect query performance; make sure you understand the implications before enabling.)
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: the number of target vertices found, which is also the number of shortest paths returned, optional, defaults to 10
- with_vertex: true means the results include full vertex information (all vertices on the paths), false means only vertex ids are returned; optional, defaults to false
3.2.11.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/singlesourceshortestpath?source="1:marko"&with_vertex=true
Response Status
200
Response Body
{
"paths": {
"2:ripple": {
}
]
}
3.2.11.3 Typical scenarios
Find the weighted shortest paths from one vertex to other vertices, for example:
- Find the fastest travel plans from Beijing to every other city in the country
3.2.12 Multi Node Shortest Path
3.2.12.1 Description
Find the pairwise shortest paths among a given set of vertices
Params
- vertices: defines the start vertices, required, specified by either:
  - ids: a list of vertex ids providing the start vertices
  - label and properties: if ids is not given, the start vertices are queried by the combined condition of label and properties
    - label: vertex label
    - properties: query start vertices by property values
      Note: a property value in properties may be a list, meaning the value for the key only needs to be in that list
- step: the path pattern walked from a start vertex to an end vertex, required, structured as follows:
  - direction: edge direction (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge labels
  - properties: filter edges by property values
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before 0.12 the step only accepted degree as the parameter name; since 0.12 the unified name is max_degree, with degree still accepted for backward compatibility)
  - skip_degree: the minimum edge count at which a super vertex is discarded during the query, i.e. when a vertex has more adjacent edges than skip_degree, the vertex is skipped entirely. Optional; if enabled, it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled, no vertices skipped). (Note: with this enabled, the traversal tries to visit up to skip_degree edges of a vertex rather than just max_degree edges, which adds traversal overhead and may significantly affect query performance; make sure you understand the implications before enabling.)
- max_depth: number of steps, required
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- with_vertex: true means the results include full vertex information (all vertices on the paths), false means only vertex ids are returned; optional, defaults to false
3.2.12.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/multinodeshortestpath
Request Body
{
"vertices": {
"ids": ["382:marko", "382:josh", "382:vadas", "382:peter", "383:lop", "383:ripple"]
},
}
]
}
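A minimal sketch of a POST traverser call, assuming a local HugeGraphServer; the vertex ids are the ones used in the request body above:

import requests

body = {
    "vertices": {"ids": ["382:marko", "382:josh", "382:vadas",
                         "382:peter", "383:lop", "383:ripple"]},
    "step": {"direction": "BOTH", "properties": {}},
    "max_depth": 10,
    "capacity": 100000000,
    "with_vertex": True
}
resp = requests.post(
    "http://localhost:8080/graphs/hugegraph/traversers/multinodeshortestpath",
    json=body)
print(resp.status_code)
for path in resp.json()["paths"]:
    print(path)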
3.2.12.3 Typical scenarios
Find the shortest paths among multiple vertices, for example:
- Find the shortest paths among several companies and their legal representatives
3.2.13 Paths (GET, basic)
3.2.13.1 Description
Find all paths matching conditions such as start vertex, target vertex, direction, edge label (optional) and maximum depth
Params
- source: start vertex id, required
- target: target vertex id, required
- direction: the direction in which edges fan out from the start vertex (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge label, optional, defaults to all edge labels
- max_depth: number of steps, required
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of paths returned, optional, defaults to 10
3.2.13.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/paths?source="1:marko"&target="1:josh"&max_depth=5
Response Status
200
Response Body
{
"paths":[
{
}
]
}
3.2.13.3 Typical scenarios
Find all paths between two vertices, for example:
- In a social network, find all possible relationship paths between two users
- In a device network, find all association paths between two devices
3.2.14 Paths (POST, advanced)
3.2.14.1 Description
Find all paths matching conditions such as start vertices, target vertices, a step and a maximum depth
Params
- sources: defines the start vertices, required, specified by either:
  - ids: a list of vertex ids providing the start vertices
  - label and properties: if ids is not given, the start vertices are queried by the combined condition of label and properties
    - label: vertex label
    - properties: query start vertices by property values
      Note: a property value in properties may be a list, meaning the value for the key only needs to be in that list
- targets: defines the end vertices, required, specified by either:
  - ids: a list of vertex ids providing the end vertices
  - label and properties: if ids is not given, the end vertices are queried by the combined condition of label and properties
    - label: vertex label
    - properties: query end vertices by property values
      Note: a property value in properties may be a list, meaning the value for the key only needs to be in that list
- step: the path pattern walked from a start vertex to an end vertex, required, structured as follows:
  - direction: edge direction (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge labels
  - properties: filter edges by property values
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before 0.12 the step only accepted degree as the parameter name; since 0.12 the unified name is max_degree, with degree still accepted for backward compatibility)
  - skip_degree: the minimum edge count at which a super vertex is discarded during the query, i.e. when a vertex has more adjacent edges than skip_degree, the vertex is skipped entirely. Optional; if enabled, it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled, no vertices skipped). (Note: with this enabled, the traversal tries to visit up to skip_degree edges of a vertex rather than just max_degree edges, which adds traversal overhead and may significantly affect query performance; make sure you understand the implications before enabling.)
- max_depth: number of steps, required
- nearest: when nearest is true, the shortest path from a start vertex to a result vertex has length depth and no shorter path exists; when nearest is false, there is a path of length depth from the start vertex to the result vertex (not necessarily the shortest, and cycles are allowed); optional, defaults to true
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of paths returned, optional, defaults to 10
- with_vertex: true means the results include full vertex information (all vertices on the paths), false means only vertex ids are returned; optional, defaults to false
3.2.14.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/paths
Request Body
{
"sources": {
"ids": ["1:marko"]
},
}
]
}
3.2.14.3 Typical scenarios
Find all paths between two vertices, for example:
- In a social network, find all possible relationship paths between two users
- In a device network, find all association paths between two devices
3.2.15 Customized Paths
3.2.15.1 Description
Find all paths that match a batch of start vertices, edge rules (including direction, edge labels and property filters) and a maximum depth
Params
- sources: defines the start vertices, required, specified by either:
  - ids: a list of vertex ids providing the start vertices
  - label and properties: if ids is not given, the start vertices are queried by the combined condition of label and properties
    - label: vertex label
    - properties: query start vertices by property values
      Note: a property value in properties may be a list, meaning the value for the key only needs to be in that list
- steps: the path rules walked from the start vertices, a list of Steps, required. Each Step is structured as follows:
  - direction: edge direction (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge labels
  - properties: filter edges by property values
  - weight_by: compute the edge weight from the given property; effective when sort_by is not NONE; mutually exclusive with default_weight
  - default_weight: the default weight used when an edge lacks the property the weight is computed from; effective when sort_by is not NONE; mutually exclusive with weight_by
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before 0.12 the step only accepted degree as the parameter name; since 0.12 the unified name is max_degree, with degree still accepted for backward compatibility)
  - sample: set this to sample the qualifying edges of a step; -1 means no sampling; defaults to sampling 100
- sort_by: sort by path weight, optional, defaults to NONE:
  - NONE: no sorting (the default)
  - INCR: ascending by path weight
  - DECR: descending by path weight
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of paths returned, optional, defaults to 10
- with_vertex: true means the results include full vertex information (all vertices on the paths), false means only vertex ids are returned; optional, defaults to false
3.2.15.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/customizedpaths
Request Body
{
"sources":{
"ids":[
}
]
}
3.2.15.3 Typical scenarios
Suitable for finding all kinds of complex path sets, for example:
- In a social network, find the paths from users who watched movies directed by Zhang Yimou to the influencers those users follow (Zhang Yimou -> movie -> user -> influencer)
- In a risk-control network, find the paths from several high-risk users to the friends of their immediate family (high-risk user -> immediate family -> friend)
3.2.16 Template Paths
3.2.16.1 Description
Find all paths that match a batch of start vertices, edge rules (including direction, edge labels and property filters) and a maximum depth
Params
- sources: defines the start vertices, required, specified by either:
  - ids: a list of vertex ids providing the start vertices
  - label and properties: if ids is not given, the start vertices are queried by the combined condition of label and properties
    - label: vertex label
    - properties: query start vertices by property values
      Note: a property value in properties may be a list, meaning the value for the key only needs to be in that list
- targets: defines the end vertices, required, specified by either:
  - ids: a list of vertex ids providing the end vertices
  - label and properties: if ids is not given, the end vertices are queried by the combined condition of label and properties
    - label: vertex label
    - properties: query end vertices by property values
      Note: a property value in properties may be a list, meaning the value for the key only needs to be in that list
- steps: the path rules walked from the start vertices, a list of Steps, required. Each Step is structured as follows:
  - direction: edge direction (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge labels
  - properties: filter edges by property values
  - max_times: the number of times the current step may repeat; when set to N, the path may pass through the current step 1 to N times from the start vertex
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before 0.12 the step only accepted degree as the parameter name; since 0.12 the unified name is max_degree, with degree still accepted for backward compatibility)
  - skip_degree: the minimum edge count at which a super vertex is discarded during the query, i.e. when a vertex has more adjacent edges than skip_degree, the vertex is skipped entirely. Optional; if enabled, it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled, no vertices skipped). (Note: with this enabled, the traversal tries to visit up to skip_degree edges of a vertex rather than just max_degree edges, which adds traversal overhead and may significantly affect query performance; make sure you understand the implications before enabling.)
- with_ring: Boolean; true means ring paths are included, false means they are not; defaults to false
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of paths returned, optional, defaults to 10
- with_vertex: true means the results include full vertex information (all vertices on the paths), false means only vertex ids are returned; optional, defaults to false
3.2.16.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/templatepaths
Request Body
{
"sources": {
"ids": [],
"label": "person",
}
]
}
3.2.16.3 Typical scenarios
Suitable for finding all kinds of complex template paths, e.g. personA -(friend)-> personB -(classmate)-> personC, where the "friend" and "classmate" edges may span at most 3 and 4 hops respectively
3.2.17 Crosspoints
3.2.17.1 Description
Find intersection points matching conditions such as start vertex, target vertex, direction, edge label (optional) and maximum depth
Params
- source: start vertex id, required
- target: target vertex id, required
- direction: the direction from the start vertex to the target vertex, with the target-to-start direction being the reverse; with BOTH the direction is ignored (OUT, IN, BOTH); optional, defaults to BOTH
- label: edge label, optional, defaults to all edge labels
- max_depth: number of steps, required
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of intersection points returned, optional, defaults to 10
3.2.17.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/crosspoints?source="2:lop"&target="2:ripple"&max_depth=5&direction=IN
Response Status
200
Response Body
{
"crosspoints":[
{
}
]
}
3.2.17.3 Typical scenarios
Find the intersection points of two vertices and the paths to them, for example:
- In a social network, find the topics or influencers two users both follow
- In a family graph, find common ancestors
3.2.18 Customized Crosspoints
3.2.18.1 Description
Given a batch of start vertices, multiple edge rules (including direction, edge labels and property filters) and a maximum depth, find the intersection of the end points of all matching paths
Params
- sources: defines the start vertices, required, specified by either:
  - ids: a list of vertex ids providing the start vertices
  - label and properties: if ids is not given, the start vertices are queried by the combined condition of label and properties
    - label: vertex label
    - properties: query start vertices by property values
      Note: a property value in properties may be a list, meaning the value for the key only needs to be in that list
- path_patterns: the path rules walked from the start vertices, a list of rules, required. Each rule is a PathPattern
  - each PathPattern is a list of Steps, each Step structured as follows:
    - direction: edge direction (OUT, IN, BOTH), defaults to BOTH
    - labels: list of edge labels
    - properties: filter edges by property values
    - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before 0.12 the step only accepted degree as the parameter name; since 0.12 the unified name is max_degree, with degree still accepted for backward compatibility)
    - skip_degree: the minimum edge count at which a super vertex is discarded during the query, i.e. when a vertex has more adjacent edges than skip_degree, the vertex is skipped entirely. Optional; if enabled, it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled, no vertices skipped). (Note: with this enabled, the traversal tries to visit up to skip_degree edges of a vertex rather than just max_degree edges, which adds traversal overhead and may significantly affect query performance; make sure you understand the implications before enabling.)
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of paths returned, optional, defaults to 10
- with_path: true means return the paths the crosspoints lie on, false means do not; optional, defaults to false
- with_vertex: optional, defaults to false:
  - true means the results include full vertex information (all vertices on the paths)
    - when with_path is true, full information of all vertices on the paths is returned
    - when with_path is false, full information of all crosspoints is returned
  - false means only vertex ids are returned
3.2.18.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/customizedcrosspoints
Request Body
{
"sources":{
"ids":[
"2:lop",
}
]
}
3.2.18.3 Typical scenarios
Query cases where the paths of a group of vertices intersect at their end points. For example:
- In a product knowledge graph, several phones, learning machines and game consoles belong, through different lower-level category paths, to the same first-level category of electronic devices
3.2.19 Rings
3.2.19.1 Description
Given a start vertex, a direction, an edge label (optional) and a maximum depth, find reachable ring paths
For example: 1 -> 25 -> 775 -> 14690 -> 25, where the ring is 25 -> 775 -> 14690 -> 25
Params
- source: start vertex id, required
- direction: the direction of the edges leaving the start vertex (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge label, optional, defaults to all edge labels
- max_depth: number of steps, required
- source_in_ring: whether the ring must include the start vertex, optional, defaults to true
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of reachable rings returned, optional, defaults to 10
3.2.19.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/rings?source="1:marko"&max_depth=2
Response Status
200
Response Body
{
"rings":[
{
}
]
}
3.2.19.3 Typical scenarios
Query the rings reachable from a start vertex, for example:
- In a risk-control project, query the people or devices in circular guarantees reachable from a user
- In a device network, discover circularly referenced devices around a device
3.2.20 Rays
3.2.20.1 Description
Given a start vertex, a direction, an edge label (optional) and a maximum depth, find the paths that radiate out to boundary vertices
For example: 1 -> 25 -> 775 -> 14690 -> 2289 -> 18379, where 18379 is a boundary vertex, i.e. no edges leave 18379
Params
- source: start vertex id, required
- direction: the direction of the edges leaving the start vertex (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge label, optional, defaults to all edge labels
- max_depth: number of steps, required
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of non-ring paths returned, optional, defaults to 10
3.2.20.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/rays?source="1:marko"&max_depth=2&direction=OUT
Response Status
200
Response Body
{
"rays":[
{
}
]
}
3.2.20.3 Typical scenarios
Find the paths from a start vertex to the boundary vertices of some relation, for example:
- In a family graph, find the paths from a person to all descendants who have no children yet
- In a device network, find the paths from a device to the terminal devices
3.2.21 Fusiform Similarity
3.2.21.1 Description
Query the "fusiform similar vertices" of a batch of vertices under given conditions. Two vertices are considered "fusiform similar" when they have some relation to many vertices in common. An example: "reader A" has read 100 books; the readers who have read more than 80 of those 100 books can be defined as the "fusiform similar vertices" of "reader A"
Params
- sources: defines the start vertices, required, specified by either:
  - ids: a list of vertex ids providing the start vertices
  - label and properties: if ids is not given, the start vertices are queried by the combined condition of label and properties
    - label: vertex label
    - properties: query start vertices by property values
      Note: a property value in properties may be a list, meaning the value for the key only needs to be in that list
- label: edge label, optional, defaults to all edge labels
- direction: the direction in which edges fan out from the start vertex (OUT, IN, BOTH), optional, defaults to BOTH
- min_neighbors: the minimum number of neighbors; with fewer neighbors than this threshold, the start vertex is considered to have no "fusiform similar vertices". For example, when looking for the "fusiform similar vertices" of "reader A" via the books read, a min_neighbors of 100 means "reader A" must have read at least 100 books to have any; required
- alpha: the similarity, i.e. the ratio of the common neighbors of the start vertex and a "fusiform similar vertex" to all neighbors of the start vertex; required
- min_similars: the minimum number of "fusiform similar vertices"; a start vertex and its "fusiform similar vertices" are returned only when there are at least this many; optional, defaults to 1
- top: return the top N most similar "fusiform similar vertices" of a start vertex; required, 0 means all
- group_property: used together with min_groups; a start vertex and its "fusiform similar vertices" are returned only when some property takes at least min_groups distinct values across the start vertex and all its "fusiform similar vertices". For example, to recommend "out-of-town" book friends to "reader A", set group_property to the reader's "city" property and min_groups to at least 2; optional, absent means no property-based filtering
- min_groups: used together with group_property; only meaningful when group_property is set
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: upper bound on the number of results returned (a start vertex together with its "fusiform similar vertices" counts as one result), optional, defaults to 10
- with_intermediary: whether to return the intermediate vertices commonly associated with the start vertex and its "fusiform similar vertices", defaults to false
- with_vertex: optional, defaults to false:
  - true means the results include full vertex information
  - false means only vertex ids are returned
3.2.21.2 Usage
Method & Url
POST http://localhost:8080/graphs/hugegraph/traversers/fusiformsimilarity
Request Body
{
"sources":{
"ids":[],
"label": "person",
}
]
}
3.2.21.3 Typical scenarios
Query vertices highly similar to a group of vertices. For example:
- Readers with a book list similar to a given reader's
- Players who play games similar to a given player's
3.2.22 Vertices
3.2.22.1 Query vertices in batch by a list of vertex ids
Params
- ids: the list of vertex ids to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/vertices?ids="1:marko"&ids="2:lop"
Response Status
200
Response Body
{
"vertices":[
{
}
]
}
3.2.22.2 Get vertex Shard information
Get the vertex shard information for a given shard size split_size (can be used together with the Scan in 3.2.22.3 to fetch vertices; a sketch combining the two follows the response below).
Params
- split_size: shard size, required
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/vertices/shards?split_size=67108864
Response Status
200
Response Body
{
"shards":[
{
......
]
}
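A minimal sketch of a full-graph scan using shards, assuming a local HugeGraphServer and that each returned shard object carries start and end fields as in the response above: fetch the shard list, then scan each shard's range (see 3.2.22.3 below for the scan parameters):

import requests

base = "http://localhost:8080/graphs/hugegraph/traversers/vertices"
shards = requests.get(base + "/shards",
                      params={"split_size": 67108864}).json()["shards"]
total = 0
for shard in shards:
    resp = requests.get(base + "/scan",
                        params={"start": shard["start"], "end": shard["end"]})
    total += len(resp.json()["vertices"])
print("vertices scanned:", total)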
3.2.22.3 Get vertices in batch by Shard information
Query vertices in batch by the given shard information (see 3.2.22.2 for how to get Shard information).
Params
- start: shard start position, required
- end: shard end position, required
- page: paging position, optional, defaults to null (no paging); when page is "" it means the first page, starting from the position indicated by start
- page_limit: upper bound on the number of vertices per page when paging, optional, defaults to 100000
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/vertices/scan?start=0&end=4294967295
Response Status
200
Response Body
{
"vertices":[
{
}
]
}
3.2.22.4 Typical scenarios
- Querying vertices by an id list is useful for batch lookups, e.g. after a path query returns several paths, the properties of all vertices on one path can be fetched in a single call.
- Getting shards and querying vertices by shard can be used to traverse all vertices
3.2.23 Edges
3.2.23.1 Query edges in batch by a list of edge ids
Params
- ids: the list of edge ids to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/edges?ids="S1:josh>1>>S2:lop"&ids="S1:josh>1>>S2:ripple"
Response Status
200
Response Body
{
"edges": [
{
}
]
}
3.2.23.2 Get edge Shard information
Get the edge shard information for a given shard size split_size (can be used together with the Scan in 3.2.23.3 to fetch edges).
Params
- split_size: shard size, required
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/edges/shards?split_size=4294967295
Response Status
200
Response Body
{
"shards":[
{
......
}
]
}
3.2.23.3 Batch-retrieve edges by Shard information
Batch-query edges by the specified shard information (see 3.2.23.2 for how to obtain shard information).
Params
- start: shard start position, required
- end: shard end position, required
- page: paging position, optional, defaults to null (no paging); when page is "", it denotes the first page, starting from the position indicated by start
- page_limit: upper limit of the number of edges per page when paging, optional, defaults to 100000
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/edges/scan?start=0&end=3221225469
Response Status
200
Response Body
{
"edges":[
{
......
}
]
}
Note: change the value of input.path in the mapping file to your own local path.
4.2.1.1 Overview
Designed for bipartite graphs, this API returns, for a source vertex, a list of the other vertices related to it together with their relevance scores.
Bipartite graph: a special model in graph theory and a special kind of network flow. Its defining property is that the vertices can be divided into two sets, with edges only between the two sets and no direct connections inside either set.
Given a bipartite graph of users and items, the random-walk-based PersonalRank algorithm works as follows:
- Pick a start user u with initial weight 1.0 and walk from Vu (with probability alpha move to a neighbor, with probability 1 - alpha stay);
- If it decides to walk outward, it picks one type of outgoing edge, e.g. rating, to find common raters:
  - it chooses one of the current vertex's neighbors uniformly at random, splitting the weight uniformly;
- Compensate the source vertex with weight 1 - alpha;
- Repeat step 2;
- Converge after a certain number of steps or once the precision threshold is reached, yielding the recommendation list. (A runnable sketch of this iteration follows the example request below.)
Params
Required:
- source: source vertex id
- label: the label of an edge type starting from the source vertex; it must connect two different vertex types
Optional:
- alpha: the probability of moving outward from a vertex in each iteration, similar to alpha in PageRank, in the range (0, 1], defaults to 0.85
- max_degree: the maximum number of adjacent edges to traverse per vertex during the query, defaults to 10000
- max_depth: number of iterations, in the range [2, 50], defaults to 5
- with_label: which results to keep, one of the following three, defaults to BOTH_LABEL
  - SAME_LABEL: keep only vertices of the same type as the source vertex
  - OTHER_LABEL: keep only vertices of the other type (the opposite side of the bipartite graph)
  - BOTH_LABEL: keep vertices of both types
- limit: the maximum number of vertices to return, defaults to 100
- max_diff: the precision difference for early convergence, defaults to 0.0001 (to be implemented)
- sorted: whether the results are sorted by rank, descending if true, unsorted otherwise, defaults to true
4.2.1.2 Usage
Method & Url
POST http://localhost:8080/graphs/hugegraph/traversers/personalrank
Request Body
{
"source": "1:1",
"label": "rating",
"alpha": 0.6,
......
}
}
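The following standalone Python sketch mirrors the iteration described in 4.2.1.1 on a tiny in-memory bipartite graph; it is an illustration of the algorithm under simplified assumptions, not the server's implementation.
def personal_rank(graph, source, alpha=0.85, max_depth=5):
    # graph: dict mapping each vertex to its list of neighbors (bipartite).
    ranks = {v: 0.0 for v in graph}
    ranks[source] = 1.0
    for _ in range(max_depth):
        new_ranks = {v: 0.0 for v in graph}
        # Each vertex spreads alpha of its weight uniformly over its neighbors.
        for v, neighbors in graph.items():
            if not neighbors:
                continue
            share = alpha * ranks[v] / len(neighbors)
            for n in neighbors:
                new_ranks[n] += share
        # Compensate the source vertex with probability mass 1 - alpha.
        new_ranks[source] += 1 - alpha
        ranks = new_ranks
    return ranks

# A toy user-item bipartite graph: users A, B rate items a, b, c.
graph = {
    "A": ["a", "c"], "B": ["a", "b", "c"],
    "a": ["A", "B"], "b": ["B"], "c": ["A", "B"],
}
print(sorted(personal_rank(graph, "A").items(), key=lambda kv: -kv[1]))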
4.2.2.1 Overview
In a general graph structure, find the top N vertices most relevant to a given start vertex at each layer, together with their relevance; in graph terms: the probability of reaching each vertex at each layer when walking outward from the start vertex.
Params
- source: source vertex id, required
- alpha: the probability of moving outward from a vertex in each iteration, similar to alpha in PageRank, required, in the range (0, 1]
- steps: the path rules walked from the start vertex, a list of Steps, each corresponding to one layer of the result, required. Each Step has the following structure:
  - direction: the direction of edges (OUT, IN, BOTH), defaults to BOTH
  - labels: a list of edge types; multiple edge types are unioned
  - max_degree: the maximum number of adjacent edges to traverse per vertex during the query, defaults to 10000 (note: before version 0.12 only degree was supported as the parameter name inside a step; from 0.12 on max_degree is used uniformly, with degree still accepted for backward compatibility)
  - top: keep only the top N highest-weight results per layer, defaults to 100, maximum 1000
- capacity: the maximum number of vertices visited during traversal, optional, defaults to 10000000
A request sketch using these parameters follows the sample response below.
4.2.2.2 Usage
Method & Url
POST http://localhost:8080/graphs/hugegraph/traversers/neighborrank
Request Body
{
"source":"O",
"steps":[
{
......
}
]
}
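A minimal request sketch in Python: the edge labels are hypothetical and must exist in your schema, and the assumption that the response holds one ranks entry per layer follows the layered structure described above.
import json
import urllib.request

body = {
    "source": "O",
    "alpha": 0.9,
    "steps": [  # one Step per layer of the result
        {"direction": "OUT", "labels": ["follow"], "max_degree": 10000, "top": 100},
        {"direction": "OUT", "labels": ["follow", "like"], "max_degree": 10000, "top": 100},
    ],
}
url = "http://localhost:8080/graphs/hugegraph/traversers/neighborrank"
req = urllib.request.Request(url, data=json.dumps(body).encode("utf-8"),
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["ranks"])  # assumed: top-N vertices/weights per layer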
4.2.2.3 Applicable Scenarios
Find the vertices most worth recommending at each layer for a given start vertex.
- For example, in a four-layer graph of viewers, friends, movies and directors: recommend movies to a viewer based on what that viewer's friends like, or recommend directors based on who made those movies.
5.1.11 - Variable API
5.1 Variables
Variables can be used to store data about the whole graph; the data is accessed as key-value pairs.
5.1.1 Create or update a key-value pair
Method & Url
PUT http://localhost:8080/graphs/hugegraph/variables/name
Request Body
{
"data": "tom"
}
Response Status
200
Response Body
{
"name": "tom"
}
5.1.2 List all key-value pairs
Method & Url
GET http://localhost:8080/graphs/hugegraph/variables
Response Status
200
Response Body
{
"name": "tom"
}
5.1.3 Get a key-value pair
Method & Url
GET http://localhost:8080/graphs/hugegraph/variables/name
Response Status
200
Response Body
{
"name": "tom"
}
5.1.4 Delete a key-value pair
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/variables/name
Response Status
204
5.1.12 - Graphs API
6.1 Graphs
6.1.1 List all graphs in the database
Method & Url
GET http://localhost:8080/graphs
Response Status
200
Response Body
{
"graphs": [
"hugegraph",
"hugegraph1"
]
}
6.1.2 View the information of a graph
Method & Url
GET http://localhost:8080/graphs/hugegraph
Response Status
200
Response Body
{
"name": "hugegraph",
"backend": "cassandra"
}
6.1.3 Clear all data of a graph, including schema, vertices, edges and indexes; this operation requires admin permission
Params
Since clearing a graph is a dangerous operation, the API takes a confirmation parameter to prevent accidental calls:
- confirm_message: defaults to
I'm sure to delete all data
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/clear?confirm_message=I%27m+sure+to+delete+all+data
Response Status
204
6.1.4 Clone a graph; this operation requires admin permission
Params
- clone_graph_name: the name of an existing graph to clone from; the user may optionally pass a configuration file, which then replaces the configuration of the existing graph
Method & Url
POST http://localhost:8080/graphs/hugegraph_clone?clone_graph_name=hugegraph
Request Body [optional]
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
backend=rocksdb
serializer=binary
store=hugegraph_clone
rocksdb.data_path=./hg2
rocksdb.wal_path=./hg2
Response Status
200
Response Body
{
"name": "hugegraph_clone",
"backend": "rocksdb"
}
6.1.5 Create a graph; this operation requires admin permission
Method & Url
POST http://localhost:8080/graphs/hugegraph2
Request Body
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
backend=rocksdb
serializer=binary
store=hugegraph2
rocksdb.data_path=./hg2
rocksdb.wal_path=./hg2
Response Status
200
Response Body
{
"name": "hugegraph2",
"backend": "rocksdb"
}
6.1.6 Delete a graph and all of its data
Params
Since deleting a graph is a dangerous operation, the API takes a confirmation parameter to prevent accidental calls:
- confirm_message: defaults to
I'm sure to drop the graph
Method & Url
DELETE http://localhost:8080/graphs/hugegraph_clone?confirm_message=I%27m%20sure%20to%20drop%20the%20graph
Response Status
204
6.2 Conf
6.2.1 View the configuration of a graph; this operation requires admin permission
Method & Url
GET http://localhost:8080/graphs/hugegraph/conf
Response Status
200
Response Body
# gremlin entrance to create graph
gremlin.graph=com.baidu.hugegraph.HugeFactory

# cache config
#schema.cache_capacity=1048576
#graph.cache_capacity=10485760
#graph.cache_expire=600

# schema illegal name template
#schema.illegal_name_regex=\s+|~.*

#vertex.default_label=vertex

backend=cassandra
serializer=cassandra

store=hugegraph
...
6.3 Mode
Legal graph modes are: NONE, RESTORING, MERGING, LOADING
- NONE mode (the default): writes of schema and graph data behave normally. In particular:
  - IDs cannot be specified when creating schema
  - IDs cannot be specified for graph data (vertices) when the id strategy is Automatic
- LOADING: enabled automatically during bulk import. In particular:
  - required properties are not checked when adding vertices/edges
There are two different modes for Restore: Restoring and Merging
- Restoring mode, for restoring into a new graph. In particular:
  - IDs may be specified when creating schema
  - IDs may be specified for graph data (vertices) when the id strategy is Automatic
- Merging mode, for merging into a graph that already has schema and graph data. In particular:
  - IDs cannot be specified when creating schema
  - IDs may be specified for graph data (vertices) when the id strategy is Automatic
Normally the graph mode is NONE. When a graph needs to be restored, temporarily switch the mode to Restoring or Merging as needed, and switch it back to NONE once the restore has finished.
6.3.1 View the mode of a graph
Method & Url
GET http://localhost:8080/graphs/hugegraph/mode
Response Status
200
Response Body
{
"mode": "NONE"
}
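A typical restore workflow wraps the mode switch around the actual restore; here is a minimal Python sketch using the mode API from 6.3.1/6.3.2.
import json
import urllib.request

MODE_URL = "http://localhost:8080/graphs/hugegraph/mode"

def set_mode(mode):
    # The body is the bare JSON string of the mode, as shown in 6.3.2.
    req = urllib.request.Request(MODE_URL, data=json.dumps(mode).encode("utf-8"),
                                 method="PUT",
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["mode"]

set_mode("RESTORING")  # switch before restoring into a new graph
# ... run the restore (e.g. via hugegraph-tools) here ...
set_mode("NONE")       # always switch back once the restore is done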
Legal graph modes are: NONE, RESTORING, MERGING
6.3.2 Set the mode of a graph; this operation requires admin permission
Method & Url
PUT http://localhost:8080/graphs/hugegraph/mode
Request Body
"RESTORING"
Legal graph modes are: NONE, RESTORING, MERGING
Response Status
200
Response Body
{
"mode": "RESTORING"
}
6.3.3 View the read mode of a graph
Params
- name: the name of the graph
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph_read_mode
Response Status
200
Response Body
{
"graph_read_mode": "ALL"
}
6.3.4 Set the read mode of a graph; this operation requires admin permission
Params
- name: the name of the graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/graph_read_mode
Request Body
"OLTP_ONLY"
Legal graph read modes are: ALL, OLTP_ONLY, OLAP_ONLY
Response Status
200
Response Body
{
"graph_read_mode": "OLTP_ONLY"
}
6.4 Snapshot
6.4.1 Create a snapshot
Params
- name: the name of the graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/snapshot_create
Response Status
200
Response Body
{
"hugegraph": "snapshot_created"
}
6.4.2 Resume from a snapshot
Params
- name: the name of the graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/snapshot_resume
Response Status
200
Response Body
{
"hugegraph": "snapshot_resumed"
}
6.5 Compact
6.5.1 Manually compact the graph; this operation requires admin permission
Params
- name: the name of the graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/compact
Response Status
200
Response Body
{
"nodes": 1,
"cluster_id": "local",
......
"local": "OK"
}
}
5.1.13 - Task API
7.1 Task
7.1.1 List all async tasks of a graph
Params
- status: the status of the async tasks
- limit: upper limit of the number of async tasks to return
Method & Url
GET http://localhost:8080/graphs/hugegraph/tasks?status=success
Response Status
200
Response Body
{
"tasks": [{
"task_name": "hugegraph.traversal().V()",
......
"task_input": "{\"gremlin\":\"hugegraph.traversal().V()\",\"bindings\":{},\"language\":\"gremlin-groovy\",\"aliases\":{\"hugegraph\":\"graph\"}}"
}]
}
7.1.2 View the information of an async task
Method & Url
GET http://localhost:8080/graphs/hugegraph/tasks/2
Response Status
200
Response Body
{
"task_name": "hugegraph.traversal().V()",
"task_progress": 0,
......
"task_callable": "com.baidu.hugegraph.api.job.GremlinAPI$GremlinJob",
"task_input": "{\"gremlin\":\"hugegraph.traversal().V()\",\"bindings\":{},\"language\":\"gremlin-groovy\",\"aliases\":{\"hugegraph\":\"graph\"}}"
}
7.1.3 Delete the information of an async task; this does not delete the async task itself
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/tasks/2
Response Status
204
7.1.4 Cancel an async task; the task must be able to handle interruption
Suppose an async task has been created via the Gremlin API as follows:
"for (int i = 0; i < 10; i++) {" +
"hugegraph.addVertex(T.label, 'man');" +
"hugegraph.tx().commit();" +
......
"break;" +
"}" +
"}"
Method & Url
PUT http://localhost:8080/graphs/hugegraph/tasks/2?action=cancel
Make sure to send this request within 10 seconds; if it is sent later, the task may have already finished and can no longer be cancelled.
Response Status
202
Response Body
{
"cancelled": true
}
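End to end, the cancellation looks like this in Python: submit the job via the asynchronous Gremlin API (see 8.1.3), then cancel it within the 10-second window. The task_id field in the job response is an assumption based on the async-task examples in this document.
import json
import urllib.request

BASE = "http://localhost:8080/graphs/hugegraph"

gremlin = ("for (int i = 0; i < 10; i++) {"
           "hugegraph.addVertex(T.label, 'man');"
           "hugegraph.tx().commit();"
           "try { sleep(1000); } catch (InterruptedException e) { break; }"
           "}")
body = {"gremlin": gremlin, "bindings": {}, "language": "gremlin-groovy",
        "aliases": {}}
req = urllib.request.Request(BASE + "/jobs/gremlin",
                             data=json.dumps(body).encode("utf-8"),
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    task_id = json.load(resp)["task_id"]  # assumed response field

# Cancel within 10 seconds, before the job finishes on its own.
cancel = urllib.request.Request("{}/tasks/{}?action=cancel".format(BASE, task_id),
                                method="PUT")
with urllib.request.urlopen(cancel) as resp:
    print(json.load(resp))  # -> {"cancelled": true}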
At this point, the number of vertices with label man must be less than 10.
5.1.14 - Gremlin API
8.1 Gremlin
8.1.1 Send a gremlin statement to HugeGraphServer (GET), executed synchronously
Params
- gremlin: the gremlin statement to send to HugeGraphServer for execution
- bindings: used to bind parameters; keys are strings and values are the bound values (strings or numbers only); functionally similar to MySQL Prepared Statements, used to speed up statement execution
- language: the language type of the statement, defaults to gremlin-groovy
- aliases: adds aliases for variables that already exist in the graph space
Query a vertex
Method & Url
GET http://127.0.0.1:8080/gremlin?gremlin=hugegraph.traversal().V('1:marko')
Response Status
200
Response Body
{
"requestId": "c6ef47a8-b634-4b07-9d38-6b3b69a3a556",
"status": {
......
"meta": {}
}
}
8.1.2 Send a gremlin statement to HugeGraphServer (POST), executed synchronously
Method & Url
POST http://localhost:8080/gremlin
Query a vertex
Request Body
{
"gremlin": "hugegraph.traversal().V('1:marko')",
"bindings": {},
"language": "gremlin-groovy",
......
"meta": {}
}
}
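A Python sketch of the synchronous POST, showing bindings and aliases together; the alias mapping used here ({"graph": "hugegraph", "g": "__g_hugegraph"}) is an assumption for illustration.
import json
import urllib.request

body = {
    "gremlin": "g.V(vid)",                 # vid is resolved via bindings
    "bindings": {"vid": "1:marko"},
    "language": "gremlin-groovy",
    "aliases": {"graph": "hugegraph", "g": "__g_hugegraph"},  # assumed aliases
}
req = urllib.request.Request("http://localhost:8080/gremlin",
                             data=json.dumps(body).encode("utf-8"),
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)
# The response nests the vertices under result.data, as in the sample above.
print(result["result"]["data"])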
8.1.3 Send a gremlin statement to HugeGraphServer (POST), executed asynchronously
Method & Url
POST http://localhost:8080/graphs/hugegraph/jobs/gremlin
Query a vertex
Request Body
{
"gremlin": "g.V('1:marko')",
"bindings": {},
"language": "gremlin-groovy",
......
"user_phone": "182****9088",
"user_email": "123@xx.com"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/users
Response Status
201
Response Body
In the response, the password is returned as encrypted ciphertext
{
"user_password": "******",
"user_email": "123@xx.com",
......
"id": "-63:boss",
"user_create": "2020-11-17 14:31:07.833"
}
9.2.2 Delete a user
Params
- id: the Id of the user to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/users/-63:test
Response Status
204
Response Body
1
9.2.3 Modify a user
Params
- id: the Id of the user to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/users/-63:test
Request Body
Modify user_name, user_password and user_phone
{
"user_name": "test",
"user_password": "******",
"user_phone": "183****9266"
......
"id": "-63:test",
"user_create": "2020-11-12 10:27:13.601"
}
9.2.4 List users
Params
- limit: upper limit of the number of results to return
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/users
Response Status
200
Response Body
{
"users": [
{
......
}
]
}
9.2.5 Query a user
Params
- id: the Id of the user to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/users/-63:admin
Response Status
200
Response Body
{
"users": [
{
......
}
]
}
9.2.6 Query the role of a user
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/users/-63:boss/role
Response Status
200
Response Body
{
"roles": {
"hugegraph": {
......
"group_name": "all",
"group_description": "group can do anything"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/groups
Response Status
201
Response Body
{
"group_creator": "admin",
"group_name": "all",
......
"id": "-69:all",
"group_description": "group can do anything"
}
9.3.2 Delete a group
Params
- id: the Id of the group to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/groups/-69:grant
Response Status
204
Response Body
1
9.3.3 Modify a group
Params
- id: the Id of the group to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/groups/-69:grant
Request Body
Modify group_description
{
"group_name": "grant",
"group_description": "grant"
}
......
"id": "-69:grant",
"group_description": "grant"
}
9.3.4 List groups
Params
- limit: upper limit of the number of results to return
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/groups
Response Status
200
Response Body
{
"groups": [
{
......
}
]
}
9.3.5 Query a group
Params
- id: the Id of the group to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/groups/-69:all
Response Status
200
Response Body
{
"group_creator": "admin",
"group_name": "all",
......
}
]
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/targets
Response Status
201
Response Body
{
"target_creator": "admin",
"target_name": "all",
......
"id": "-77:all",
"target_update": "2020-11-11 15:32:01.192"
}
9.4.2 Delete a target
Params
- id: the Id of the target to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/targets/-77:gremlin
Response Status
204
Response Body
1
9.4.3 Modify a target
Params
- id: the Id of the target to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/targets/-77:gremlin
Request Body
Modify the type in the target definition
{
"target_name": "gremlin",
"target_graph": "hugegraph",
"target_url": "127.0.0.1:8080",
......
"id": "-77:gremlin",
"target_update": "2020-11-12 09:37:12.780"
}
9.4.4 List targets
Params
- limit: upper limit of the number of results to return
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/targets
Response Status
200
Response Body
{
"targets": [
{
......
}
]
}
9.4.5 Query a target
Params
- id: the Id of the target to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/targets/-77:grant
Response Status
200
Response Body
{
"target_creator": "admin",
"target_name": "grant",
......
"user": "-63:boss",
"group": "-69:all"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/belongs
Response Status
201
Response Body
{
"belong_create": "2020-11-11 16:19:35.422",
"belong_creator": "admin",
......
"user": "-63:boss",
"group": "-69:all"
}
9.5.2 Delete a belong (user-group mapping)
Params
- id: the Id of the belong to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:grant
Response Status
204
Response Body
1
9.5.3 Modify a belong
A belong can only have its description modified; its user and group properties cannot be changed. To change them, delete the original belong and create a new one.
Params
- id: the Id of the belong to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:grant
Request Body
Modify belong_description
{
"belong_description": "update test"
}
Response Status
200
......
"user": "-63:boss",
"group": "-69:grant"
}
9.5.4 List belongs
Params
- limit: upper limit of the number of results to return
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/belongs
Response Status
200
Response Body
{
"belongs": [
{
......
}
]
}
9.5.5 View a belong
Params
- id: the Id of the belong to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:all
Response Status
200
Response Body
{
"belong_create": "2020-11-11 16:19:35.422",
"belong_creator": "admin",
......
"target": "-77:all",
"access_permission": "READ"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/accesses
Response Status
201
Response Body
{
"access_permission": "READ",
"access_create": "2020-11-11 15:54:54.008",
......
"group": "-69:all",
"target": "-77:all"
}
9.6.2 Delete an access (group-target authorization)
Params
- id: the Id of the access to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>12>S-77:all
Response Status
204
Response Body
1
9.6.3 Modify an access
An access can only have its description modified; its group, target and permission cannot be changed. To change the access relation, delete the original access and create a new one.
Params
- id: the Id of the access to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>12>S-77:all
Request Body
Modify access_description
{
"access_description": "test"
}
Response Status
200
......
"group": "-69:all",
"target": "-77:all"
}
9.6.4 List accesses
Params
- limit: upper limit of the number of results to return
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/accesses
Response Status
200
Response Body
{
"accesses": [
{
......
}
]
}
9.6.5 Query an access
Params
- id: the Id of the access to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>11>S-77:all
Response Status
200
Response Body
{
"access_permission": "READ",
"access_create": "2020-11-11 15:54:54.008",
......
"group": "-69:all",
"target": "-77:all"
}
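Pulling the chapter together, here is a hedged Python sketch that wires user, group, target, belong and access into one grant. The target_resources value is an assumption for illustration, and a real deployment would also authenticate as admin.
import json
import urllib.request

BASE = "http://localhost:8080/graphs/hugegraph/auth"

def post(path, body):
    # Minimal helper: POST a JSON body and return the created object's id.
    req = urllib.request.Request(BASE + path,
                                 data=json.dumps(body).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]

# Wire the pieces together: user -> group (belong), group -> target (access).
user = post("/users", {"user_name": "boss", "user_password": "******"})
group = post("/groups", {"group_name": "all"})
target = post("/targets", {"target_name": "all", "target_graph": "hugegraph",
                           "target_url": "127.0.0.1:8080",
                           "target_resources": [{"type": "ALL"}]})  # assumed
post("/belongs", {"user": user, "group": group})
post("/accesses", {"group": group, "target": target,
                   "access_permission": "READ"})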
5.1.16 - Other API
10.1 Other
10.1.1 View HugeGraph version information
Method & Url
GET http://localhost:8080/versions
Response Status
200
Response Body
{
"versions": {
"version": "v1",
......
This command shows the current graph mode, one of: NONE, RESTORING, MERGING.
bin/hugegraph graph-mode-set -m RESTORING
This command sets the graph mode. Before a restore it can be set to RESTORING or MERGING; in this example it is set to RESTORING.
Step 2: Restore the data
bin/hugegraph restore -t all -d data
This command re-imports all schema and graph data under the data directory into the hugegraph graph at http://127.0.0.1.
Step 3: Restore the graph mode
bin/hugegraph graph-mode-set -m NONE
This command restores the graph mode to NONE.
This completes a full graph backup and restore cycle.
Help
For detailed usage of the backup and restore commands, see the hugegraph-tools documentation.
APIs used and implemented by Backup/Restore
Backup
Backup exports via the corresponding list (GET) APIs for schema and graph data; no new APIs were added.
Restore
Restore imports via the corresponding create (POST) APIs for schema and graph data; no new APIs were added.
There are two different modes for Restore, Restoring and Merging, plus the regular default mode NONE. The differences are:
- NONE mode: writes of schema and graph data behave normally; see the feature description. In particular:
  - IDs cannot be specified when creating schema
  - IDs cannot be specified for graph data (vertices) when the id strategy is Automatic
- Restoring mode, for restoring into a new graph. In particular:
  - IDs may be specified when creating schema
  - IDs may be specified for graph data (vertices) when the id strategy is Automatic
- Merging mode, for merging into a graph that already has schema and graph data. In particular:
  - IDs cannot be specified when creating schema
  - IDs may be specified for graph data (vertices) when the id strategy is Automatic
Normally the graph mode is NONE. When a graph needs to be restored, temporarily switch the mode to Restoring or Merging as needed, and switch it back to NONE once the restore has finished.
The RESTful APIs implemented for setting the graph mode are as follows:
View the mode of a graph; this operation requires admin permission
Method & Url
GET http://localhost:8080/graphs/{graph}/mode
Response Status
200
Response Body
{
"mode": "NONE"
}
Legal graph modes are: NONE, RESTORING, MERGING
Set the mode of a graph; this operation requires admin permission
Method & Url
PUT http://localhost:8080/graphs/{graph}/mode
Request Body
"RESTORING"
Legal graph modes are: NONE, RESTORING, MERGING
Response Status
200
Response Body
{
"mode": "RESTORING"
}
6.5 - FAQ
How do I choose a backend store? RocksDB, Cassandra, HBase or MySQL?
It depends on your specific needs. In general, for a single machine or a data volume under 10 billion, RocksDB is recommended; otherwise a distributed storage backend cluster is recommended.
When starting the service, it prints: xxx (core dumped) xxx
Check whether the JDK version is Java 11 (at least Java 8).
The service started successfully, but operating on the graph gives messages like "cannot connect to the backend or the connection is not open"
Before starting the service for the first time, initialize the backend with init-store; later versions will make this message clearer and more direct.
Do all backends need init-store before use, and can the serializer be chosen arbitrarily?
All backends except memory need it, e.g. cassandra, hbase and rocksdb; the serializer must match the backend one-to-one and cannot be chosen arbitrarily.
Running init-store fails with: Exception in thread "main" java.lang.UnsatisfiedLinkError: /tmp/librocksdbjni3226083071221514754.so: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.10' not found (required by /tmp/librocksdbjni3226083071221514754.so)
RocksDB requires gcc 4.3.0 (GLIBCXX_3.4.10) or above.
Running init-store.sh fails with: NoHostAvailableException
NoHostAvailableException means the Cassandra service cannot be reached. If you do intend to use the cassandra backend, install and start that service first. The message itself may not be very direct; we will clarify it in the documentation.
The bin directory contains three startup-related scripts, start-hugegraph.sh, start-restserver.sh and start-gremlinserver.sh; which one should be used?
Since version 0.3.3, GremlinServer and RestServer have been merged into HugeGraphServer; start it with start-hugegraph.sh. The other two will be removed in a later version.
Two graphs are configured, named hugegraph and hugegraph1, and the startup command is start-hugegraph.sh; does that only open the hugegraph graph?
start-hugegraph.sh opens all graphs under graphs in gremlin-server.yaml; there is no direct relationship between the script name and the graph names.
After the service starts successfully, querying all vertices with curl returns garbled output
Batches of vertices/edges returned by the server are gzip-compressed. Pipe the output to gunzip to decompress it (curl http://example | gunzip), or send the request with Firefox's postman or Chrome's restlet plugin, which decompress the response automatically.
Querying a vertex by its Id through the RESTful API returns nothing, but the vertex definitely exists
Check the type of the vertex Id: if it is a string, the id part of the API url must be wrapped in double quotes; numeric ids need no quotes.
The vertex Id has been double-quoted as needed, but querying through the RESTful API still returns nothing
Check whether the vertex id contains any of the URL-reserved characters +, space, /, ?, %, & and =; if so, they must be encoded. The table below gives the encoded values:
Special character | Encoded value
--------- | ----
+ | %2B
space | %20
/ | %2F
? | %3F
% | %25
# | %23
& | %26
= | %3D
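In practice the encoding need not be done by hand; for example, Python's urllib.parse.quote applies exactly this table (the URL below is illustrative):
from urllib.parse import quote

vid = '"1:mark?o"'  # a string vertex id containing a reserved character
url = ("http://localhost:8080/graphs/hugegraph/graph/vertices/"
       + quote(vid, safe=""))
print(url)  # ends with /%221%3Amark%3Fo%22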
Querying vertices or edges of a certain label (query by label) times out
Since the amount of data under one label can be large, add a limit.
Operating on the graph through the RESTful API works, but sending Gremlin statements fails with: Request Failed(500)
The GremlinServer configuration may be wrong; check that host and port in gremlin-server.yaml match gremlinserver.url in rest-server.properties; if they do not, fix them and restart the service.
Importing data with Loader hits a Socket Timeout exception and Loader then aborts
Continuous import puts the Server under heavy load, which causes some requests to time out. Tuning Loader's parameters (e.g. retry count, retry interval, error tolerance) can ease the pressure on the Server and make this less frequent.
How can all vertices and edges be deleted? There is no such RESTful API, and calling g.V().drop() via gremlin fails with Vertices in transaction have reached capacity xxx
There is currently no good way to delete all data. If you deploy the Server and backend yourself, you can simply clear the database and restart the Server. Alternatively, fetch all data with the paging API or scan API first, then delete it item by item.
The database was cleared and init-store was run, but adding schema reports "xxx has existed"
HugeGraphServer has internal caches; when clearing the database you also need to restart the Server, otherwise stale caches cause inconsistencies.
Inserting vertices or edges fails with: Id max length is 128, but got xxx {yyy} or Big id max length is 32768, but got xxx
To guarantee query performance, the current backend stores limit the length of the id column: a vertex id may not exceed 128 bytes, an edge id may not exceed 32768 bytes, and an index id may not exceed 128 bytes.
Are nested properties supported, and if not, is there an alternative?
Nested properties are not supported at present. Alternative: extract the nested properties as separate vertices and connect them with edges.
Can one EdgeLabel connect multiple pairs of VertexLabels, e.g. an "invests" relation where a "person" invests in a "company" and a "company" also invests in a "company"?
One EdgeLabel cannot connect multiple pairs of VertexLabels; split the EdgeLabel more finely, e.g. "person-invests" and "company-invests".
Sending a request via the Rest API reports HTTP 415 Unsupported Media Type
The request header must specify Content-Type:application/json
Other questions can be searched in the issue area of the corresponding project, e.g. Server-Issues / Loader Issues
7 - QUERY LANGUAGE
7.1 - HugeGraph Gremlin
Overview
HugeGraph supports Gremlin, the graph traversal query language of Apache TinkerPop3. Where SQL is the query language of relational databases, Gremlin is a general-purpose graph database query language: it can create graph entities (Vertex and Edge), modify entity properties and delete entities, and above all it can run graph query and analysis operations.
TinkerPop Features
HugeGraph implements the TinkerPop framework, but does not implement every TinkerPop feature.
The tables below list HugeGraph's support for each TinkerPop feature:
Graph Features
Name Description Support Computer Determines if the {@code Graph} implementation supports {@link GraphComputer} based processing false Transactions Determines if the {@code Graph} implementations supports transactions. true Persistence Determines if the {@code Graph} implementation supports persisting it’s contents natively to disk.This feature does not refer to every graph’s ability to write to disk via the Gremlin IO packages(.e.g. GraphML), unless the graph natively persists to disk via those options somehow. For example,TinkerGraph does not support this feature as it is a pure in-sideEffects graph. true ThreadedTransactions Determines if the {@code Graph} implementation supports threaded transactions which allow a transaction be executed across multiple threads via {@link Transaction#createThreadedTx()}. false ConcurrentAccess Determines if the {@code Graph} implementation supports more than one connection to the same instance at the same time. For example, Neo4j embedded does not support this feature because concurrent access to the same database files by multiple instances is not possible. However, Neo4j HA could support this feature as each new {@code Graph} instance coordinates with the Neo4j cluster allowing multiple instances to operate on the same database. false
Vertex Features
Name Description Support UserSuppliedIds Determines if an {@link Element} can have a user defined identifier. Implementation that do not support this feature will be expected to auto-generate unique identifiers. In other words, if the {@link Graph} allows {@code graph.addVertex(id,x)} to work and thus set the identifier of the newly added {@link Vertex} to the value of {@code x} then this feature should return true. In this case, {@code x} is assumed to be an identifier data type that the {@link Graph} will accept. false NumericIds Determines if an {@link Element} has numeric identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a numeric value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false StringIds Determines if an {@link Element} has string identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a string value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false UuidIds Determines if an {@link Element} has UUID identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a {@link UUID} value then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false CustomIds Determines if an {@link Element} has a specific custom object as their internal representation.In other words, if the value returned from {@link Element#id()} is a type defined by the graph implementations, such as OrientDB’s {@code Rid}, then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false AnyIds Determines if an {@link Element} any Java object is a suitable identifier. TinkerGraph is a good example of a {@link Graph} that can support this feature, as it can use any {@link Object} as a value for the identifier. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. This setting should only return {@code true} if {@link #supportsUserSuppliedIds()} is {@code true}. false AddProperty Determines if an {@link Element} allows properties to be added. This feature is set independently from supporting “data types” and refers to support of calls to {@link Element#property(String, Object)}. true RemoveProperty Determines if an {@link Element} allows properties to be removed. true AddVertices Determines if a {@link Vertex} can be added to the {@code Graph}. true MultiProperties Determines if a {@link Vertex} can support multiple properties with the same key. false DuplicateMultiProperties Determines if a {@link Vertex} can support non-unique values on the same key. For this value to be {@code true}, then {@link #supportsMetaProperties()} must also return true. By default this method, just returns what {@link #supportsMultiProperties()} returns. false MetaProperties Determines if a {@link Vertex} can support properties on vertex properties. It is assumed that a graph will support all the same data types for meta-properties that are supported for regular properties. 
false RemoveVertices Determines if a {@link Vertex} can be removed from the {@code Graph}. true
Edge Features
Name Description Support UserSuppliedIds Determines if an {@link Element} can have a user defined identifier. Implementation that do not support this feature will be expected to auto-generate unique identifiers. In other words, if the {@link Graph} allows {@code graph.addVertex(id,x)} to work and thus set the identifier of the newly added {@link Vertex} to the value of {@code x} then this feature should return true. In this case, {@code x} is assumed to be an identifier data type that the {@link Graph} will accept. false NumericIds Determines if an {@link Element} has numeric identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a numeric value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false StringIds Determines if an {@link Element} has string identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a string value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false UuidIds Determines if an {@link Element} has UUID identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a {@link UUID} value then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false CustomIds Determines if an {@link Element} has a specific custom object as their internal representation.In other words, if the value returned from {@link Element#id()} is a type defined by the graph implementations, such as OrientDB’s {@code Rid}, then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false AnyIds Determines if an {@link Element} any Java object is a suitable identifier. TinkerGraph is a good example of a {@link Graph} that can support this feature, as it can use any {@link Object} as a value for the identifier. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. This setting should only return {@code true} if {@link #supportsUserSuppliedIds()} is {@code true}. false AddProperty Determines if an {@link Element} allows properties to be added. This feature is set independently from supporting “data types” and refers to support of calls to {@link Element#property(String, Object)}. true RemoveProperty Determines if an {@link Element} allows properties to be removed. true AddEdges Determines if an {@link Edge} can be added to a {@code Vertex}. true RemoveEdges Determines if an {@link Edge} can be removed from a {@code Vertex}. true
Data Type Features
Name Description Support BooleanValues true ByteValues true DoubleValues true FloatValues true IntegerValues true LongValues true MapValues Supports setting of a {@code Map} value. The assumption is that the {@code Map} can contain arbitrary serializable values that may or may not be defined as a feature itself false MixedListValues Supports setting of a {@code List} value. The assumption is that the {@code List} can contain arbitrary serializable values that may or may not be defined as a feature itself. As this{@code List} is “mixed” it does not need to contain objects of the same type. false BooleanArrayValues false ByteArrayValues true DoubleArrayValues false FloatArrayValues false IntegerArrayValues false LongArrayValues false SerializableValues false StringArrayValues false StringValues true UniformListValues Supports setting of a {@code List} value. The assumption is that the {@code List} can contain arbitrary serializable values that may or may not be defined as a feature itself. As this{@code List} is “uniform” it must contain objects of the same type. false
Gremlin Steps
HugeGraph supports all Gremlin steps. For a complete Gremlin reference, see the official Gremlin documentation.
| Step | Description | Documentation |
|------|-------------|---------------|
| addE | Adds an edge between two vertices | addE step |
| addV | Adds a vertex to the graph | addV step |
| and | Ensures that all traversals return a value | and step |
| as | A step modulator to assign a variable to the output of a step | as step |
| by | A step modulator used with group and order | by step |
| coalesce | Returns the first traversal that returns a result | coalesce step |
| constant | Returns a constant value; used with coalesce | constant step |
| count | Returns a count from the traversal | count step |
| dedup | Returns values with duplicates removed | dedup step |
| drop | Drops values (vertices/edges) | drop step |
| fold | Acts as a barrier that computes an aggregate of results | fold step |
| group | Groups values based on the specified labels | group step |
| has | Used to filter properties, vertices and edges; supports the hasLabel, hasId, hasNot and has variants | has step |
| inject | Injects values into a stream | inject step |
| is | Used to perform a filter using a boolean expression | is step |
| limit | Used to limit the number of items in the traversal | limit step |
| local | Wraps a section of a traversal locally, similar to a subquery | local step |
| not | Used to produce the negation of a filter | not step |
| optional | Returns the result of the specified traversal if it yields a result, otherwise returns the calling element | optional step |
| or | Ensures that at least one traversal returns a value | or step |
| order | Returns results in the specified sort order | order step |
| path | Returns the full path of the traversal | path step |
| project | Projects properties as a map | project step |
| properties | Returns the properties for the specified labels | properties step |
| range | Filters based on the specified range of values | range step |
| repeat | Repeats a step the specified number of times; used for looping | repeat step |
| sample | Used to sample results from the traversal | sample step |
| select | Used to project results from the traversal | select step |
| store | Used for non-blocking aggregation from the traversal | store step |
| tree | Aggregates paths from a vertex into a tree | tree step |
| unfold | Unrolls an iterator as a step | unfold step |
| union | Merges results from multiple traversals | union step |
| V | Includes the steps needed for traversals between vertices and edges: V, E, out, in, both, outE, inE, bothE, outV, inV, bothV and otherV | vertex steps |
| where | Used to filter results from the traversal; supports the eq, neq, lt, lte, gt, gte and between operators | where step |
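As an illustrative sketch (not part of the original reference), the traversal below chains several of the steps above; the character vertex named pluto comes from the example graph in the next section, and a bound traversal source g is assumed:

```groovy
// Combines has(), out(), dedup(), order()/by(), limit() and values()
// from the step table above; 'pluto' is assumed to exist in the graph.
g.V().hasLabel('character').has('name', 'pluto')  // filter vertices with has
     .out('brother')                              // follow out-edges labeled 'brother'
     .dedup()                                     // remove duplicate vertices
     .order().by('name')                          // sort by the 'name' property
     .limit(10)                                   // cap the result size
     .values('name')                              // project the 'name' values
```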
7.2 - HugeGraph Examples
1 Overview
This example uses the TitanDB Getting Started guide as a template to demonstrate how HugeGraph is used, and compares HugeGraph with TitanDB to highlight the differences between them.
1.1 Similarities and Differences Between HugeGraph and TitanDB
HugeGraph and TitanDB are both graph databases based on the Apache TinkerPop3 framework. Both support the Gremlin graph query language and are similar in usage and APIs. However, HugeGraph was designed and developed from scratch, with a clean code structure, rich functionality, and friendlier interfaces.
Compared with TitanDB, HugeGraph's main characteristics are:
- HugeGraph currently offers a complete set of tooling components, including HugeGraph-API, HugeGraph-Client, HugeGraph-Loader, HugeGraph-Studio and HugeGraph-Spark, covering system integration, data loading, visual graph querying, Spark connectivity, and more;
- HugeGraph has the concepts of Server and Client, so third-party systems can integrate via jar reference, client, or API, whereas TitanDB only supports jar-reference integration.
- HugeGraph's schema must be defined explicitly; all inserts and queries go through strict schema validation, and implicit schema creation is not yet supported.
- HugeGraph makes full use of the characteristics of the backend storage system for efficient data access, whereas TitanDB uses a uniform KV structure regardless of backend differences.
- HugeGraph's updates can be performed on demand (for example, updating a single property), which performs better; TitanDB updates are read-and-update.
- HugeGraph's VertexId and EdgeId support concatenation, enabling automatic deduplication and faster queries; all TitanDB IDs are auto-generated, so queries must go through an index.
1.2 Character Relationship Graph
This example uses the Property Graph Model to describe the relationships between the characters of Greek mythology (also known as a character relationship graph); see the figure below for details.
Circular nodes represent entities (Vertex), arrows represent relationships (Edge), and the boxes contain properties.
The graph contains two kinds of vertices, character and location, as listed below:

| Name | Type | Properties |
|------|------|------------|
| character | vertex | name, age, type |
| location | vertex | name |

There are six kinds of relationships: father, mother, brother, battled, lives, and pet. Details of the relationship graph are as follows:

| Name | Type | Source vertex label | Target vertex label | Properties |
|------|------|---------------------|---------------------|------------|
| father | edge | character | character | - |
| mother | edge | character | character | - |
| brother | edge | character | character | - |
| pet | edge | character | character | - |
| lives | edge | character | location | reason |
In HugeGraph, each edge label can connect only one pair of source vertex label and target vertex label. That is, if a graph defines a relationship father connecting character to character, then father cannot also connect any other vertex labels.
Therefore, this example represents the original TitanDB labels monster, god, human and demigod with the single vertex label character, adding a type property to mark the kind of character. The edge labels stay the same as in the original TitanDB example. Alternatively, the edge label constraint could be satisfied by adjusting the edge label names.
2 Graph Schema and Data Ingest Examples
HugeGraph requires the schema to be created explicitly, so PropertyKey, VertexLabel and EdgeLabel must be created in turn; if indexes are needed, IndexLabel must be created as well.
2.1 Graph Schema
schema = hugegraph.schema()
schema.propertyKey("name").asText().ifNotExist().create()
schema.propertyKey("age").asInt().ifNotExist().create()
// what is the name of the brother and the name of the place?
g.V(pluto).out('brother').as('god').out('lives').as('place').select('god','place').by('name')
We recommend using HugeGraph-Studio to run the code above visually. It can also be run via HugeGraph-Client, HugeApi, GremlinConsole, GremlinDriver and other means.
3.2 Summary
HugeGraph currently supports the Gremlin syntax; users can implement all kinds of query requirements via Gremlin / REST-API.
8 - PERFORMANCE
8.1 - HugeGraph BenchMark Performance
1 Test Environment
1.1 Hardware

| CPU | Memory | NIC | Disk |
|-----|--------|-----|------|
| 48 × Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD |

1.2 Software
1.2.1 Test Cases
The tests use graphdb-benchmark, a benchmark suite for graph databases. It contains four kinds of tests:
Massive Insertion: bulk insertion of vertices and edges, committing a batch of vertices or edges at a time
Single Insertion: inserting one record at a time, committing each vertex or edge immediately
Query: the basic graph database query operations:
- Find Neighbors: query the neighbors of every vertex
- Find Adjacent Nodes: query the adjacent vertices of every edge
- Find Shortest Path: query the shortest paths from the first vertex to 100 random vertices
Clustering: community detection based on the Louvain Method
1.2.2 Test Datasets
The tests use both synthetic and real data:
MIW, SIW and QW use SNAP datasets
CW uses synthetic data produced by the LFR-Benchmark generator
Dataset sizes used in this test:

| Name | Vertices | Edges | File size |
|------|----------|-------|-----------|
| email-enron.txt | 36,691 | 367,661 | 4MB |
| com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB |
| amazon0601.txt | 403,393 | 3,387,388 | 47.9MB |
| com-lj.ungraph.txt | 3,997,961 | 34,681,189 | 479MB |

1.3 Service Configuration
HugeGraph version: 0.5.6; RestServer, Gremlin Server and the backends run on the same server
- RocksDB version: rocksdbjni-5.8.6
Titan version: 0.5.4, in thrift+Cassandra mode
- Cassandra version: cassandra-3.10; commit-log and data share the SSD
Neo4j version: 2.0.1
The Titan version that graphdb-benchmark is adapted to is 0.5.4
2 Test Results
2.1 Batch Insertion Performance

| Backend | email-enron(300k) | amazon0601(3M) | com-youtube.ungraph(3M) | com-lj.ungraph(30M) |
|---------|-------------------|----------------|--------------------------|----------------------|
| HugeGraph | 0.629 | 5.711 | 5.243 | 67.033 |
| Titan | 10.15 | 108.569 | 150.266 | 1217.944 |
| Neo4j | 3.884 | 18.938 | 24.890 | 281.537 |

Notes
- The number in parentheses in the header is the data size, in edges
- The table values are batch insertion times, in seconds
- For example, HugeGraph with RocksDB inserted the 3 million edges of the amazon0601 dataset in 5.711s
Conclusions
- Batch insertion performance: HugeGraph(RocksDB) > Neo4j > Titan(thrift+Cassandra)
2.2 Traversal Performance
2.2.1 Terminology (see the Gremlin sketch after this list)
- FN (Find Neighbor): iterate over all vertices, look up each vertex's adjacent edges, and reach the other vertex through edge and vertex
- FA (Find Adjacent): iterate over all edges, obtaining each edge's source vertex and target vertex
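A hedged Gremlin rendering of the two traversals (graphdb-benchmark itself drives the backends directly, so this is only a semantic sketch assuming a bound traversal source g):

```groovy
// FN: from every vertex, walk its adjacent edges to the vertex on the other side
g.V().bothE().otherV().count()

// FA: from every edge, obtain both its source and target vertices
g.E().bothV().count()
```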
2.2.2 FN Performance

| Backend | email-enron(36k) | amazon0601(400k) | com-youtube.ungraph(1.2M) | com-lj.ungraph(4M) |
|---------|------------------|-------------------|----------------------------|---------------------|
| HugeGraph | 4.072 | 45.118 | 66.006 | 609.083 |
| Titan | 8.084 | 92.507 | 184.543 | 1099.371 |
| Neo4j | 2.424 | 10.537 | 11.609 | 106.919 |

Notes
- The number in parentheses in the header is the data size, in vertices
- The table values are vertex traversal times, in seconds
- For example, HugeGraph with the RocksDB backend traversed all vertices of amazon0601, looking up the adjacent edges and the other vertex, in 45.118s in total
2.2.3 FA Performance

| Backend | email-enron(300k) | amazon0601(3M) | com-youtube.ungraph(3M) | com-lj.ungraph(30M) |
|---------|-------------------|----------------|--------------------------|----------------------|
| HugeGraph | 1.540 | 10.764 | 11.243 | 151.271 |
| Titan | 7.361 | 93.344 | 169.218 | 1085.235 |
| Neo4j | 1.673 | 4.775 | 4.284 | 40.507 |

Notes
- The number in parentheses in the header is the data size, in edges
- The table values are edge traversal times, in seconds
- For example, HugeGraph with the RocksDB backend traversed all edges of amazon0601, querying both vertices of each edge, in 10.764s in total
Conclusions
- Traversal performance: Neo4j > HugeGraph(RocksDB) > Titan(thrift+Cassandra)
2.3 Performance of HugeGraph's Common Graph Analysis Methods
Terminology (see the Gremlin sketch after this list)
- FS (Find Shortest Path): find the shortest paths between vertices
- K-neighbor: all vertices reachable from a start vertex within K hops, i.e. vertices reachable in 1, 2, 3… (K-1), K hops
- K-out: vertices reachable from a start vertex in exactly K hops along out-edges
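As a hedged Gremlin sketch of these two terms (the benchmark uses HugeGraph's built-in traverser APIs; g and the start vertex id start are assumptions):

```groovy
def K = 3

// K-out: vertices reachable in exactly K hops along out-edges
g.V(start).repeat(out().dedup()).times(K).dedup()

// K-neighbor: vertices reachable within 1..K hops in either direction
g.V(start).repeat(both().dedup()).emit().times(K).dedup()
```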
FS Performance

| Backend | email-enron(300k) | amazon0601(3M) | com-youtube.ungraph(3M) | com-lj.ungraph(30M) |
|---------|-------------------|----------------|--------------------------|----------------------|
| HugeGraph | 0.494 | 0.103 | 3.364 | 8.155 |
| Titan | 11.818 | 0.239 | 377.709 | 575.678 |
| Neo4j | 1.719 | 1.800 | 1.956 | 8.530 |

Notes
- The number in parentheses in the header is the data size, in edges
- The table values are the times, in seconds, to find the shortest paths from the first vertex to 100 randomly chosen vertices
- For example, HugeGraph with the RocksDB backend found the shortest paths from the first vertex of amazon0601 to 100 random vertices in 0.103s in total
Conclusions
- When the data size is small or vertices have few associations, HugeGraph outperforms Neo4j and Titan
- As the data size grows and vertex connectivity increases, HugeGraph and Neo4j converge in performance, both far ahead of Titan
K-neighbor Performance

| Vertex | Depth 1 | Depth 2 | Depth 3 | Depth 4 | Depth 5 | Depth 6 |
|--------|---------|---------|---------|---------|---------|---------|
| v1 (time) | 0.031s | 0.033s | 0.048s | 0.500s | 11.27s | OOM |
| v111 (time) | 0.027s | 0.034s | 0.115s | 1.36s | OOM | – |
| v1111 (time) | 0.039s | 0.027s | 0.052s | 0.511s | 10.96s | OOM |

Notes
- HugeGraph-Server's JVM memory was set to 32GB; OOM occurs when the data volume is too large
K-out Performance

| Vertex | Metric | Depth 1 | Depth 2 | Depth 3 | Depth 4 | Depth 5 | Depth 6 |
|--------|--------|---------|---------|---------|---------|---------|---------|
| v1 | time | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s | OOM |
| v1 | degree | 10 | 133 | 2,453 | 50,830 | 1,128,688 | – |
| v111 | time | 0.032s | 0.042s | 0.136s | 1.25s | 20.62s | OOM |
| v111 | degree | 10 | 211 | 4,944 | 113,150 | 2,629,970 | – |
| v1111 | time | 0.039s | 0.045s | 0.053s | 1.10s | 2.92s | OOM |
| v1111 | degree | 10 | 140 | 2,555 | 50,825 | 1,070,230 | – |

Notes
- HugeGraph-Server's JVM memory was set to 32GB; OOM occurs when the data volume is too large
Conclusions
- In the FS scenario, HugeGraph outperforms Neo4j and Titan
- In the K-neighbor and K-out scenarios, HugeGraph returns results within seconds for up to 5 degrees
2.4 Comprehensive Graph Performance Test: CW

| Database | Size 1000 | Size 5000 | Size 10000 | Size 20000 |
|----------|-----------|-----------|------------|------------|
| HugeGraph(core) | 20.804 | 242.099 | 744.780 | 1700.547 |
| Titan | 45.790 | 820.633 | 2652.235 | 9568.623 |
| Neo4j | 5.913 | 50.267 | 142.354 | 460.880 |

Notes
- "Size" is measured in vertices
- The table values are the times, in seconds, for community detection to complete; for example, HugeGraph with the RocksDB backend took 744.780s on the size-10000 dataset for the community aggregation to stabilize
- The CW test is a comprehensive CRUD evaluation
- In this test HugeGraph, like Titan, bypasses the client and operates directly on core
Conclusions
- Community clustering performance: Neo4j > HugeGraph > Titan
8.2 - HugeGraph-API Performance
The HugeGraph API performance tests measure HugeGraph-Server's ability to handle concurrent RESTful API requests, including:
- single insertion of vertices/edges
- batch insertion of vertices/edges
- queries of vertices/edges
The RESTful API performance results for each HugeGraph release can be found below:
Earlier releases only provide API performance results for the best-performing backend; starting from version 0.5.6, results are provided for both stand-alone and cluster deployments
8.2.1 - v0.5.6 Stand-alone(RocksDB)
1 Test Environment
Machine under test:

| CPU | Memory | NIC | Disk |
|-----|--------|-----|------|
| 48 × Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD |

- Load generator: same configuration as the machine under test
- Test tool: apache-Jmeter-2.5.1
Note: the load generator and the machine under test are in the same data center
2 Test Description
2.1 Definitions (all times in ms)
- Samples – the total number of threads completed in the scenario
- Average – average response time
- Median – the statistical median of the response times
- 90% Line – the response time that 90% of all threads stayed below
- Min – minimum response time
- Max – maximum response time
- Error – error rate
- Throughput – throughput
- KB/sec – throughput measured by traffic
2.2 Underlying Storage
RocksDB is used as the backend; HugeGraph and RocksDB run on the same machine. Apart from host and port changes, the server configuration files keep their defaults.
3 Performance Summary
- HugeGraph inserts single vertices and edges at about 10k per second
- Batch insertion of vertices and edges is far faster than single insertion
- Concurrency for querying vertices and edges by id reaches 13000 or more, with an average request latency below 50ms
4 Test Results and Analysis
4.1 Batch Insertion
4.1.1 Stress Ceiling Test
Method
Keep raising the concurrency to find the maximum load under which the server still serves normally
Stress parameters
Duration: 5min
Maximum vertex insertion rate:
Conclusion:
- At a concurrency of 2200, vertex throughput is 2026.8; records processed per second: 2026.8 × 200 = 405,360/s
Maximum edge insertion rate
Conclusion:
- At a concurrency of 900, edge throughput is 776.9; records processed per second: 776.9 × 500 = 388,450/s
4.2 Single Insertion
4.2.1 Stress Ceiling Test
Method
Keep raising the concurrency to find the maximum load under which the server still serves normally
Stress parameters
- Duration: 5min
- Service failure criterion: error rate greater than 0.00%
Single insertion of vertices
Conclusion:
- At a concurrency of 11500, throughput is 10730; the single-insert concurrency capacity for vertices is 11500
Single insertion of edges
Conclusion:
- At a concurrency of 9000, throughput is 8418; the single-insert concurrency capacity for edges is 9000
4.3 Query by id
4.3.1 Stress Ceiling Test
Method
Keep raising the concurrency to find the maximum load under which the server still serves normally
Stress parameters
- Duration: 5min
- Service failure criterion: error rate greater than 0.00%
Vertex query by id
Conclusion:
- At a concurrency of 14000, throughput is 12663; the query-by-id concurrency capacity for vertices is 14000, with an average latency of 44ms
Edge query by id
Conclusion:
- At a concurrency of 13000, throughput is 12225; the query-by-id concurrency capacity for edges is 13000, with an average latency of 12ms
8.2.2 - v0.5.6 Cluster(Cassandra)
1 Test Environment
Machine under test:

| CPU | Memory | NIC | Disk |
|-----|--------|-----|------|
| 48 × Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD |

- Load generator: same configuration as the machine under test
- Test tool: apache-Jmeter-2.5.1
Note: the load generator and the machine under test are in the same data center
2 Test Description
2.1 Definitions (all times in ms)
- Samples – the total number of threads completed in the scenario
- Average – average response time
- Median – the statistical median of the response times
- 90% Line – the response time that 90% of all threads stayed below
- Min – minimum response time
- Max – maximum response time
- Error – error rate
- Throughput – throughput
- KB/sec – throughput measured by traffic
2.2 Underlying Storage
A 15-node Cassandra cluster is used as the backend; HugeGraph and the Cassandra cluster are on different servers. Apart from host and port changes, the server configuration files keep their defaults.
3 Performance Summary
- HugeGraph's single-insert rates for vertices and edges are 9000/s and 4500/s respectively
- Batch insertion rates for vertices and edges are 50k/s and 150k/s respectively, far higher than single insertion
- Concurrency for querying vertices and edges by id reaches 12000 or more, with an average request latency below 70ms
4 Test Results and Analysis
4.1 Batch Insertion
4.1.1 Stress Ceiling Test
Method
Keep raising the concurrency to find the maximum load under which the server still serves normally
Stress parameters
Duration: 5min
Maximum vertex insertion rate:
Conclusion:
- At a concurrency of 3500, vertex throughput is 261; records processed per second: 261 × 200 = 52,200/s
Maximum edge insertion rate
Conclusion:
- At a concurrency of 1000, edge throughput is 323; records processed per second: 323 × 500 = 161,500/s
4.2 Single Insertion
4.2.1 Stress Ceiling Test
Method
Keep raising the concurrency to find the maximum load under which the server still serves normally
Stress parameters
- Duration: 5min
- Service failure criterion: error rate greater than 0.00%
Single insertion of vertices
Conclusion:
- At a concurrency of 9000, throughput is 8400; the single-insert concurrency capacity for vertices is 9000
Single insertion of edges
Conclusion:
- At a concurrency of 4500, throughput is 4160; the single-insert concurrency capacity for edges is 4500
4.3 Query by id
4.3.1 Stress Ceiling Test
Method
Keep raising the concurrency to find the maximum load under which the server still serves normally
Stress parameters
- Duration: 5min
- Service failure criterion: error rate greater than 0.00%
Vertex query by id
Conclusion:
- At a concurrency of 14500, throughput is 13576; the query-by-id concurrency capacity for vertices is 14500, with an average latency of 11ms
Edge query by id
Conclusion:
- At a concurrency of 12000, throughput is 10688; the query-by-id concurrency capacity for edges is 12000, with an average latency of 63ms
8.2.3 - v0.4.4
1 Test Environment
Machine under test:

| No. | CPU | Memory | NIC | Disk |
|-----|-----|--------|-----|------|
| 1 | 24 × Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps | 1.4T HDD |
| 2 | 48 × Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD |

- Load generator: same configuration as machine 1
- Test tool: apache-Jmeter-2.5.1
Note: the load generator and the machine under test are in the same data center
2 Test Description
2.1 Definitions (all times in ms)
- Samples – the total number of threads completed in the scenario
- Average – average response time
- Median – the statistical median of the response times
- 90% Line – the response time that 90% of all threads stayed below
- Min – minimum response time
- Max – maximum response time
- Error – error rate
- Throughput – throughput
- KB/sec – throughput measured by traffic
2.2 Underlying Storage
RocksDB is used as the backend; HugeGraph and RocksDB run on the same machine. Apart from host and port changes, the server configuration files keep their defaults.
3 Performance Summary
- HugeGraph can process at most 7000 requests per second
- Batch insertion is far faster than single insertion, reaching 220k edges/s and 370k vertices/s on the server
- With RocksDB as the backend, adding CPUs and memory improves batch insertion performance; doubling CPU and memory improves it by 45%-60%
- For batch insertion, replacing HDD with SSD brings only a small 3%-5% gain
4 Test Results and Analysis
4.1 Batch Insertion
4.1.1 Stress Ceiling Test
Method
Keep raising the concurrency to find the maximum load under which the server still serves normally
Stress parameters
Duration: 5min
Maximum insertion rates for vertices and edges (high-performance server, RocksDB data on SSD):
Conclusion:
- At a concurrency of 1000, edge throughput is 451; records processed per second: 451 × 500 = 225,500/s
- At a concurrency of 2000, vertex throughput is 1842.4; records processed per second: 1842.4 × 200 = 368,480/s
1. Effect of CPU and memory on insertion performance (both servers store RocksDB data on HDD, batch insertion)
Conclusion:
- With the same HDD and with CPU and memory doubled:
- Edges: throughput rose from 268 to 426, a gain of about 60%
- Vertices: throughput rose from 1263.8 to 1842.4, a gain of about 45%
2. Effect of SSD vs. HDD on insertion performance (high-performance server, batch insertion)
Conclusion:
- Edges: throughput 451.7 on SSD vs. 426.6 on HDD, a 5% gain
- Vertices: throughput 1842.4 on SSD vs. 1794 on HDD, a gain of about 3%
3. Effect of the number of concurrent threads on insertion performance (ordinary server, RocksDB data on HDD)
Conclusion:
- Vertices: at 1000 concurrency the response time is 7ms, while at 1500 concurrency it is 1028ms, a huge gap, and throughput holds at around 1300, so the inflection point is around 1300; at 1300 concurrency the response time is already 22ms, which is acceptable. Compared with HugeGraph 0.2 (1000 concurrency: average response time 8959ms), the processing capacity is a qualitative leap;
- Edges: from 1000 to 2000 concurrency, processing takes too long (over 3s) and throughput hovers around 270, so raising the concurrency further will not increase throughput much; 270 is the inflection point. Compared with HugeGraph 0.2 (1000 concurrency: average response time 31849ms), the improvement is dramatic;
4.2 Single Insertion
4.2.1 Stress Ceiling Test
Method
Keep raising the concurrency to find the maximum load under which the server still serves normally
Stress parameters
- Duration: 5min
- Service failure criterion: error rate greater than 0.00%
Conclusion:
- Vertices:
- 4000 concurrency: normal, no errors, average latency below 1ms; 6000 concurrency: no errors, average latency 5ms, acceptable;
- 8000 concurrency: 0.01% errors with connection timeout failures, beyond capacity; the ceiling is around 7000
- Edges:
- 4000 concurrency: response time 1ms; 6000 concurrency: no anomalies, average response time 8ms (the main difference lies in IO network recv/send and CPU);
- 8000 concurrency: 0.01% error rate, average latency 15ms; the inflection point is around 7000, matching the vertex result;
8.2.4 - v0.2
1 Test Environment
1.1 Hardware and Software
The load generator and the machine under test have the same configuration, with these basic parameters:

| CPU | Memory | NIC |
|-----|--------|-----|
| 24 × Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps |

Test tool: apache-Jmeter-2.5.1
1.2 Service Configuration
- HugeGraph version: 0.2
- Backend storage: the embedded cassandra-3.10, deployed as a single node;
- Backend configuration changes: the following two properties in cassandra.yaml were modified, all other options keep their defaults
batch_size_warn_threshold_in_kb: 1000
batch_size_fail_threshold_in_kb: 1000
- HugeGraphServer, HugeGremlinServer and cassandra all run on the same machine; apart from host and port changes, the server configuration files keep their defaults.
1.3 Definitions
- Samples – the total number of threads completed in the scenario
- Average – average response time
- Median – the statistical median of the response times
- 90% Line – the response time that 90% of all threads stayed below
- Min – minimum response time
- Max – maximum response time
- Error – error rate
- Throughput – throughput
- KB/sec – throughput measured by traffic
Note: all times are in ms
2 Test Results
2.1 schema

| Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
|-------|---------|---------|--------|---------|-----|-----|--------|------------|--------|
| property_keys | 331000 | 1 | 1 | 2 | 0 | 172 | 0.00% | 920.7/sec | 178.1 |
| vertex_labels | 331000 | 1 | 2 | 2 | 1 | 126 | 0.00% | 920.7/sec | 193.4 |
| edge_labels | 331000 | 2 | 2 | 3 | 1 | 158 | 0.00% | 920.7/sec | 242.8 |

Conclusion: under a sustained load of 1000 concurrent threads for 5 minutes, the schema APIs average a 1-2ms response time, no pressure
2.2 Single Insertion
2.2.1 Insertion Rate Test
Stress parameters
Method: fixed concurrency, measuring the processing rate of the server and backend
- Concurrency: 1000
- Duration: 5min
Metrics

| Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
|-------|---------|---------|--------|---------|-----|-----|--------|------------|--------|
| single_insert_vertices | 331000 | 0 | 1 | 1 | 0 | 21 | 0.00% | 920.7/sec | 234.4 |
| single_insert_edges | 331000 | 2 | 2 | 3 | 1 | 53 | 0.00% | 920.7/sec | 309.1 |

Conclusions
- Vertices: average response time 1ms; each request inserts 1 record and about 920 requests are handled per second, so about 1 × 920 ≈ 920 records are processed per second;
- Edges: average response time 1ms; each request inserts 1 record and about 920 requests are handled per second, so about 1 × 920 ≈ 920 records are processed per second;
2.2.2 Stress Ceiling Test
Method: keep raising the concurrency to find the maximum load under which the server still serves normally
Stress parameters
- Duration: 5min
- Service failure criterion: error rate greater than 0.00%
Metrics

| Concurrency | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
|-------------|---------|---------|--------|---------|-----|-----|--------|------------|--------|
| 2000(vertex) | 661916 | 1 | 1 | 1 | 0 | 3012 | 0.00% | 1842.9/sec | 469.1 |
| 4000(vertex) | 1316124 | 13 | 1 | 14 | 0 | 9023 | 0.00% | 3673.1/sec | 935.0 |
| 5000(vertex) | 1468121 | 1010 | 1135 | 1227 | 0 | 9223 | 0.06% | 4095.6/sec | 1046.0 |
| 7000(vertex) | 1378454 | 1617 | 1708 | 1886 | 0 | 9361 | 0.08% | 3860.3/sec | 987.1 |
| 2000(edge) | 629399 | 953 | 1043 | 1113 | 1 | 9001 | 0.00% | 1750.3/sec | 587.6 |
| 3000(edge) | 648364 | 2258 | 2404 | 2500 | 2 | 9001 | 0.00% | 1810.7/sec | 607.9 |
| 4000(edge) | 649904 | 1992 | 2112 | 2211 | 1 | 9001 | 0.06% | 1812.5/sec | 608.5 |

Conclusions
- Vertices:
- 4000 concurrency: normal, no errors, average latency 13ms;
- 5000 concurrency: inserting 5000 records per second yields a 0.06% error rate; the server can no longer keep up, so the ceiling is around 4000
- Edges:
- 1000 concurrency: response time 2ms, far below that at 2000 concurrency, mainly because IO network recv/send and CPU nearly double;
- 2000 concurrency: inserting 2000 records per second, average latency 953ms, about 1750 requests handled per second;
- 3000 concurrency: inserting 3000 records per second, average latency 2258ms, about 1810 requests handled per second;
- 4000 concurrency: inserting 4000 records per second, about 1812 requests handled per second;
2.3 Batch Insertion
2.3.1 Insertion Rate Test
Stress parameters
Method: fixed concurrency, measuring the processing rate of the server and backend
- Concurrency: 1000
- Duration: 5min
Metrics

| Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
|-------|---------|---------|--------|---------|-----|-----|--------|------------|--------|
| batch_insert_vertices | 37162 | 8959 | 9595 | 9704 | 17 | 9852 | 0.00% | 103.4/sec | 393.3 |
| batch_insert_edges | 10800 | 31849 | 34544 | 35132 | 435 | 35747 | 0.00% | 28.8/sec | 814.9 |

Conclusions
- Vertices: average response time 8959ms, which is too long. Each request inserts 199 records, and about 103 requests are handled per second, so about 199 × 103 ≈ 20k records are processed per second;
- Edges: average response time 31849ms, which is too long. Each request inserts 499 records, and about 28 requests are handled per second, so about 499 × 28 ≈ 13,900 records are processed per second;
8.3 - HugeGraph-Loader Performance
Use Cases
When the number of graph records (vertices and edges) to be bulk-loaded is at the billion scale or below, or the total data size is under 1TB, the HugeGraph-Loader tool can import graph data continuously at high speed
Performance
All tests use the edge data of the website dataset
RocksDB single-machine performance
- With label index disabled: 228k edges/s
- With label index enabled: 153k edges/s
Cassandra cluster performance
- With label index enabled (the default): 63k edges/s
8.4 - HugeGraph BenchMark Performance (v0.4.4)
1 Test Environment
1.1 Hardware

| CPU | Memory | NIC | Disk |
|-----|--------|-----|------|
| 48 × Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD |

1.2 Software
1.2.1 Test Cases
The tests use graphdb-benchmark, a benchmark suite for graph databases. It contains four kinds of tests:
Massive Insertion: bulk insertion of vertices and edges, committing a batch of vertices or edges at a time
Single Insertion: inserting one record at a time, committing each vertex or edge immediately
Query: the basic graph database query operations:
- Find Neighbors: query the neighbors of every vertex
- Find Adjacent Nodes: query the adjacent vertices of every edge
- Find Shortest Path: query the shortest paths from the first vertex to 100 random vertices
Clustering: community detection based on the Louvain Method
1.2.2 Test Datasets
The tests use both synthetic and real data:
MIW, SIW and QW use SNAP datasets
CW uses synthetic data produced by the LFR-Benchmark generator
Dataset sizes used in this test:

| Name | Vertices | Edges | File size |
|------|----------|-------|-----------|
| email-enron.txt | 36,691 | 367,661 | 4MB |
| com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB |
| amazon0601.txt | 403,393 | 3,387,388 | 47.9MB |

1.3 Service Configuration
- HugeGraph version: 0.4.4; RestServer, Gremlin Server and the backends run on the same server
- Cassandra version: cassandra-3.10; commit-log and data share the SSD
- RocksDB version: rocksdbjni-5.8.6
- Titan version: 0.5.4, in thrift+Cassandra mode
The Titan version that graphdb-benchmark is adapted to is 0.5.4
2 Test Results
2.1 Batch Insertion Performance

| Backend | email-enron(300k) | amazon0601(3M) | com-youtube.ungraph(3M) |
|---------|-------------------|----------------|--------------------------|
| Titan | 9.516 | 88.123 | 111.586 |
| RocksDB | 2.345 | 14.076 | 16.636 |
| Cassandra | 11.930 | 108.709 | 101.959 |
| Memory | 3.077 | 15.204 | 13.841 |

Notes
- The number in parentheses in the header is the data size, in edges
- The table values are batch insertion times, in seconds
- For example, HugeGraph with RocksDB inserted the 3 million edges of amazon0601 in 14.076s, roughly 210k edges/s
Conclusions
- The RocksDB and Memory backends insert faster than Cassandra
- With Cassandra as the backend, HugeGraph's and Titan's insertion performance is similar
2.2 Traversal Performance
2.2.1 Terminology
- FN (Find Neighbor): iterate over all vertices, look up each vertex's adjacent edges, and reach the other vertex through edge and vertex
- FA (Find Adjacent): iterate over all edges, obtaining each edge's source vertex and target vertex
2.2.2 FN Performance

| Backend | email-enron(36k) | amazon0601(400k) | com-youtube.ungraph(1.2M) |
|---------|------------------|-------------------|----------------------------|
| Titan | 7.724 | 70.935 | 128.884 |
| RocksDB | 8.876 | 65.852 | 63.388 |
| Cassandra | 13.125 | 126.959 | 102.580 |
| Memory | 22.309 | 207.411 | 165.609 |

Notes
- The number in parentheses in the header is the data size, in vertices
- The table values are vertex traversal times, in seconds
- For example, HugeGraph with the RocksDB backend traversed all vertices of amazon0601, looking up the adjacent edges and the other vertex, in 65.852s in total
2.2.3 FA Performance

| Backend | email-enron(300k) | amazon0601(3M) | com-youtube.ungraph(3M) |
|---------|-------------------|----------------|--------------------------|
| Titan | 7.119 | 63.353 | 115.633 |
| RocksDB | 6.032 | 64.526 | 52.721 |
| Cassandra | 9.410 | 102.766 | 94.197 |
| Memory | 12.340 | 195.444 | 140.89 |

Notes
- The number in parentheses in the header is the data size, in edges
- The table values are edge traversal times, in seconds
- For example, HugeGraph with the RocksDB backend traversed all edges of amazon0601, querying both vertices of each edge, in 64.526s in total
Conclusions
- HugeGraph RocksDB > Titan thrift+Cassandra > HugeGraph Cassandra > HugeGraph Memory
2.3 Performance of HugeGraph's Common Graph Analysis Methods
Terminology
- FS (Find Shortest Path): find the shortest paths between vertices
- K-neighbor: all vertices reachable from a start vertex within K hops, i.e. vertices reachable in 1, 2, 3… (K-1), K hops
- K-out: vertices reachable from a start vertex in exactly K hops along out-edges
FS Performance

| Backend | email-enron(300k) | amazon0601(3M) | com-youtube.ungraph(3M) |
|---------|-------------------|----------------|--------------------------|
| Titan | 11.333 | 0.313 | 376.06 |
| RocksDB | 44.391 | 2.221 | 268.792 |
| Cassandra | 39.845 | 3.337 | 331.113 |
| Memory | 35.638 | 2.059 | 388.987 |

Notes
- The number in parentheses in the header is the data size, in edges
- The table values are the times, in seconds, to find the shortest paths from the first vertex to 100 randomly chosen vertices
- For example, HugeGraph with the RocksDB backend found the shortest paths from the first vertex of amazon0601 to 100 random vertices in 2.221s in total
Conclusions
- When the data size is small or vertices have few associations, Titan's shortest path outperforms HugeGraph
- As the data size and vertex connectivity grow, HugeGraph's shortest path outperforms Titan
K-neighbor Performance

| Vertex | Depth 1 | Depth 2 | Depth 3 | Depth 4 | Depth 5 | Depth 6 |
|--------|---------|---------|---------|---------|---------|---------|
| v1 (time) | 0.031s | 0.033s | 0.048s | 0.500s | 11.27s | OOM |
| v111 (time) | 0.027s | 0.034s | 0.115s | 1.36s | OOM | – |
| v1111 (time) | 0.039s | 0.027s | 0.052s | 0.511s | 10.96s | OOM |

Notes
- HugeGraph-Server's JVM memory was set to 32GB; OOM occurs when the data volume is too large
K-out Performance

| Vertex | Metric | Depth 1 | Depth 2 | Depth 3 | Depth 4 | Depth 5 | Depth 6 |
|--------|--------|---------|---------|---------|---------|---------|---------|
| v1 | time | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s | OOM |
| v1 | degree | 10 | 133 | 2,453 | 50,830 | 1,128,688 | – |
| v111 | time | 0.032s | 0.042s | 0.136s | 1.25s | 20.62s | OOM |
| v111 | degree | 10 | 211 | 4,944 | 113,150 | 2,629,970 | – |
| v1111 | time | 0.039s | 0.045s | 0.053s | 1.10s | 2.92s | OOM |
| v1111 | degree | 10 | 140 | 2,555 | 50,825 | 1,070,230 | – |

Notes
- HugeGraph-Server's JVM memory was set to 32GB; OOM occurs when the data volume is too large
Conclusions
- In the FS scenario, HugeGraph outperforms Titan
- In the K-neighbor and K-out scenarios, HugeGraph returns results within seconds for up to 5 degrees
2.4 Comprehensive Graph Performance Test: CW

| Database | Size 1000 | Size 5000 | Size 10000 | Size 20000 |
|----------|-----------|-----------|------------|------------|
| Titan | 45.943 | 849.168 | 2737.117 | 9791.46 |
| Memory(core) | 41.077 | 1825.905 | * | * |
| Cassandra(core) | 39.783 | 862.744 | 2423.136 | 6564.191 |
| RocksDB(core) | 33.383 | 199.894 | 763.869 | 1677.813 |

Notes
- "Size" is measured in vertices
- The table values are the times, in seconds, for community detection to complete; for example, HugeGraph with the RocksDB backend took 763.869s on the size-10000 dataset for the community aggregation to stabilize
- "*" means it did not finish within 10000s
- The CW test is a comprehensive CRUD evaluation
- The last three rows are HugeGraph's different backends; in this test HugeGraph, like Titan, bypasses the client and operates directly on core
Conclusions
- With the Cassandra backend, HugeGraph slightly outperforms Titan, and the advantage grows with data size; at size 20000 it is 30% faster than Titan
- With the RocksDB backend, HugeGraph far outperforms Titan and HugeGraph's Cassandra backend, by about 6x and 4x respectively
9 - CHANGELOGS
9.1 - HugeGraph 1.0.0 Release Notes
OLTP API & Client 更新
API/Client 接口更新
- 支持热更新
trace
开关的 /exception/trace
API。 - 支持 Cypher 图查询语言 API。
- 支持通过 Swagger UI 接口来查看提供的 API 列表。
- 将各算法中 ’limit’ 参数的类型由 long 调整为 int。
- 支持在 Client 端跳过 Server 对 HBase 写入数据 (Beta)。
Core & Server
Feature Updates
- Support Java 11.
- Support 2 new OLTP algorithms: adamic-adar and resource-allocation.
- Support hash RowKeys for the HBase backend, and allow pre-initializing HBase tables.
- Support the Cypher graph query language.
- Support automatic management and failover of the cluster Master role.
- Support 16 OLAP algorithms, including LPA, Louvain, PageRank, BetweennessCentrality, RingsDetect, etc.
- Adapt to the Apache Foundation's release requirements, including license compliance, release process and code style, supporting Apache releases.
Bug Fixes
- Fix being unable to query edges by multiple Labels plus properties.
- Add a maximum depth limit to the ring detection algorithm.
- Fix abnormal results returned by the tree() statement.
- Fix a check error when batch-updating edges with Ids passed in.
- Fix unexpected Task statuses.
- Fix the edge cache not being cleared when a vertex is updated.
- Fix an error when executing g.V() on the MySQL backend.
- Fix a problem caused by server-info being unable to time out.
- Export the ConditionP type for users in Gremlin.
- Fix the within + Text.contains query problem.
- Fix a race condition in the addIndexLabel/removeIndexLabel API.
- Restrict exporting graph instances to Admin only.
- Fix a check problem in the Profile API.
- Fix the Empty Graph problem in count().is(0) queries.
- Fix the service failing to shut down on exceptions.
- Fix the JNA UnsatisfiedLinkError on Apple M1 systems.
- Fix an NPE when starting RpcServer.
- Fix the ACTION_CLEARED parameter count problem.
- Fix an RpcServer startup problem.
- Fix a potential number conversion hazard with user-supplied parameters.
- Remove the Word tokenizer dependency.
- Fix the Cassandra and MySQL backends not closing iterators gracefully on exceptions.
Configuration Changes
- Move the raft.endpoint option from the Graph scope to the Server scope.
Other Changes
- refact(core): enhance schema job module.
- refact(raft): improve raft module & test & install snapshot and add peer.
- refact(core): remove early cycle detection & limit max depth.
- cache: fix assert node.next==empty.
- fix apache license conflicts: jnr-posix and jboss-logging.
- chore: add logo in README & remove outdated log4j version.
- refact(core): improve CachedGraphTransaction perf.
- chore: update CI config & support ci robot & add codeQL SEC-check & graph option.
- refact: ignore security check api & fix some bugs & clean code.
- doc: enhance CONTRIBUTING.md & README.md.
- refact: add checkstyle plugin & clean/format the code.
- refact(core): improve decode string empty bytes & avoid array-construct columns in BackendEntry.
- refact(cassandra): translate ipv4 to ipv6 metrics & update cassandra dependency version.
- chore: use .asf.yaml for apache workflow & replace APPLICATION_JSON with TEXT_PLAIN.
- feat: add system schema store.
- refact(rocksdb): update rocksdb version to 6.22 & improve rocksdb code.
- refact: update mysql scope to test & clean protobuf style/configs.
- chore: upgrade Dockerfile server to 0.12.0 & add editorconfig & improve ci.
- chore: upgrade grpc version.
- feat: support updateIfPresent/updateIfAbsent operation.
- chore: modify abnormal logs & upgrade netty-all to 4.1.44.
- refact: upgrade dependencies & adopt new analyzer & clean code.
- chore: improve .gitignore & update ci configs & add RAT/flatten plugin.
- chore(license): add dependencies-check ci & 3rd-party dependency licenses.
- refact: Shutdown log when shutdown process & fix tx leak & enhance the file path.
- refact: rename package to apache & dependency in all modules (Breaking Change).
- chore: add license checker & update antrun plugin & fix building problem in windows.
- feat: support one-step script for apache release v1.0.0 release.
Computer (OLAP)
Algorithm Changes
- Support the PageRank algorithm.
- Support the WCC algorithm.
- Support the degree centrality algorithm.
- Support the triangle count algorithm.
- Support the rings detection algorithm.
- Support the LPA algorithm.
- Support the k-core algorithm.
- Support the closeness centrality algorithm.
- Support the betweenness centrality algorithm.
- Support the cluster coefficient algorithm.
Platform Changes
- feat: init module computer-core & computer-algorithm & etcd dependency.
- feat: add Id as base type of vertex id.
- feat: init Vertex/Edge/Properties & JsonStructGraphOutput.
- feat: load data from hugegraph server.
- feat: init basic combiner, Bsp4Worker, Bsp4Master.
- feat: init sort & transport interface & basic FileInput/Output Stream.
- feat: init computation & ComputerOutput/Driver interface.
- feat: init Partitioner and HashPartitioner
- feat: init Master/WorkerService module.
- feat: init Heap/LoserTree sorting.
- feat: init rpc module.
- feat: init transport server, client, en/decode, flowControl, heartbeat.
- feat: init DataDirManager & PointerCombiner.
- feat: init aggregator module & add copy() and assign() methods to Value class.
- feat: add startAsync and finishAsync on client side, add onStarted and onFinished on server side.
- feat: init store/sort module.
- feat: link managers in worker sending end.
- feat: implement data receiver of worker.
- feat: implement StreamGraphInput and EntryInput.
- feat: add Sender and Receiver to process compute message.
- feat: add seqfile fromat.
- feat: add ComputeManager.
- feat: add computer-k8s and computer-k8s-operator.
- feat: add startup and make docker image code.
- feat: sort different type of message use different combiner.
- feat: add HDFS output format.
- feat: mount config-map and secret to container.
- feat: support java11.
- feat: support partition concurrent compute.
- refact: abstract computer-api from computer-core.
- refact: optimize data receiving.
- fix: release file descriptor after input and compute.
- doc: add operator deploy readme.
- feat: prepare for Apache release.
Toolchain (loader, tools, hubble)
- Support Loader selecting which data to import from a relational database via SQL.
- Support Loader importing data from Spark (including via JDBC).
- Add a Flink-CDC mode to Loader.
- Fix an NPE when Loader imports ORC-format data.
- Fix Loader not caching the Schema in Spark/Flink mode.
- Fix Loader's Json deserialization problem.
- Fix Loader's Jackson version conflicts and dependency problems.
- Support a UI for Hubble's advanced algorithm APIs.
- Support syntax highlighting of Gremlin statements in Hubble.
- Support deploying Hubble with a Docker image.
- Support outputting build logs.
- Fix the port input box problem in Hubble.
- Adapt to Apache project releases.
Commons (common, rpc)
- Support assert-throws methods returning a Future.
- Add Cnm and Anm methods to CollectionUtil.
- Support user-defined content-type.
- Adapt to Apache project releases.
Release Details
For more detailed release notes, see the individual sub-repositories:
9.2 - HugeGraph 0.11 Release Notes
API & Client
Feature Updates
- Support the fusiform similarity algorithm (hugegraph #671, hugegraph-client #62)
- Support recording the creation time when creating schema (hugegraph #746, hugegraph-client #69)
- Support property-based range queries of vertices/edges in the RESTful API (hugegraph #782, hugegraph-client #73)
- Support TTL for vertices and edges (hugegraph #794, hugegraph-client #83)
- Unify the date format of the RESTful API Server and Gremlin Server as strings (hugegraph #1014, hugegraph-client #82)
- Support 5 traversal algorithms: same neighbors, Jaccard similarity, all shortest paths, weighted shortest path, and single-source shortest path (hugegraph #936, hugegraph-client #80)
- Support user authentication and fine-grained access control (hugegraph #749, hugegraph #985, hugegraph-client #81)
- Support vertex counting in the traversal APIs (hugegraph #995, hugegraph-client #84)
- Support the HTTPS protocol (hugegraph #1036, hugegraph-client #85)
- Support controlling whether to rebuild the index when creating an index (hugegraph #1106, hugegraph-client #91)
- Support 5 traversal algorithms: customized kout/kneighbor, multi-vertex shortest path, most-similar Jaccard vertices, and template paths (hugegraph #1174, hugegraph-client #100, hugegraph-client #106)
Internal Changes
- Fail fast when HugeGraphServer hits an exception at startup (hugegraph #748)
- Define a LOADING mode to speed up imports (hugegraph-client #101)
Core
Feature Updates
- Support paged queries of vertices/edges by multiple properties (hugegraph #759)
- Optimize the performance of aggregate operations (hugegraph #813)
- Support off-heap caching (hugegraph #846)
- Support property-level permission management (hugegraph #971)
- Support sharding for the MySQL and Memory backends, and improve HBase sharding (hugegraph #974)
- Support the Raft-based distributed consensus protocol (hugegraph #1020)
- Support metadata copy (hugegraph #1024)
- Support asynchronous task scheduling across the cluster (hugegraph #1030)
- Support printing heap information on OOM (hugegraph #1093)
- Support cache updates via the Raft state machine (hugegraph #1119)
- Support Raft node management (hugegraph #1137)
- Support rate limiting of query requests (hugegraph #1158)
- Support default values for vertex/edge properties (hugegraph #1182)
- Support the pluggable query-acceleration mechanism RamTable (hugegraph #1183)
- Support marking an index INVALID when rebuilding fails (hugegraph #1226)
- Support Kerberos authentication for HBase (hugegraph #1234)
Bug Fixes
- Fix the start-hugegraph.sh timeout when configuring permissions (hugegraph #761)
- Fix MySQL connection failures when running gremlin in studio (hugegraph #765)
- Fix the TableNotFoundException during truncate on the HBase backend (hugegraph #771)
- Fix the rate limit option value not being checked (hugegraph #773)
- Fix inaccurate exception messages for Unique Index (hugegraph #797)
- Fix the OOM when executing g.V().hasLabel().count() on the RocksDB backend (hugegraph #798)
- Fix the wrong paging setting in traverseByLabel() (hugegraph #805)
- Fix edges being mistakenly created when updating edge properties by ID and SortKeys (hugegraph #819)
- Fix overwrite problems in some storage backends (hugegraph #820)
- Fix failed async tasks not being cancelable when saved (hugegraph #827)
- Fix the MySQL backend failing to open the database in SSL mode (hugegraph #842)
- Fix offset not taking effect in index queries (hugegraph #866)
- Fix the security issue of absolute path disclosure in Gremlin (hugegraph #871)
- Fix the NPE in reconnectIfNeeded() (hugegraph #874)
- Fix PostgreSQL's JDBC_URL missing the "/" prefix (hugegraph #891)
- Fix RocksDB memory statistics (hugegraph #937)
- Fix ring detection failing to detect two-vertex rings (hugegraph #939)
- Fix the fusiform similarity algorithm not clearing counts after finishing (hugegraph #947)
- Fix gremlin-console not working (hugegraph #1027)
- Fix filtering adjacent edges by condition with a limit (hugegraph #1057)
- Fix the auto-commit problem when MySQL executes SQL (hugegraph #1064)
- Fix the 80w-limit timeout when querying through two indexes (hugegraph #1088)
- Fix the wrong range index check rule (hugegraph #1090)
- Fix errors when deleting leftover indexes (hugegraph #1101)
- Fix transactions hanging on close when the current thread is a task-worker (hugegraph #1111)
- Fix the NoSuchElementException in shortest path queries (hugegraph #1116)
- Fix async tasks sometimes being submitted twice (hugegraph #1130)
- Fix deserialization of very small date values (hugegraph #1152)
- Fix traversal algorithms not checking whether the start or end vertex exists (hugegraph #1156)
- Fix argument parsing errors in bin/start-hugegraph.sh (hugegraph #1178)
- Fix log4j error messages when running gremlin-console (hugegraph #1229)
Internal Changes
- Defer checking of non-nullable properties (hugegraph #756)
- Add viewing of cluster node information for storage backends (hugegraph #821)
- Add advanced compaction options for the RocksDB backend (hugegraph #825)
- Add the vertex.check_adjacent_vertex_exist option (hugegraph #837)
- Check that primary key properties are not allowed to be empty (hugegraph #847)
- Add validity checks for graph names (hugegraph #854)
- Add querying for unexpected SysProps (hugegraph #862)
- Use disableTableAsync to speed up data clearing on the HBase backend (hugegraph #868)
- Allow the Gremlin environment to trigger system async tasks (hugegraph #892)
- Encode the type ID in character-type indexes (hugegraph #894)
- Allow Cassandra to create threads on demand when executing CQL in the security module (hugegraph #896)
- Set GremlinServer's default channel to WsAndHttpChannelizer (hugegraph #903)
- Export Direction and the traversal algorithm classes to the Gremlin environment (hugegraph #904)
- Add a vertex property cache limit (hugegraph #941, hugegraph #942)
- Optimize reading of list properties (hugegraph #943)
- Add L1 and L2 cache configuration (hugegraph #945)
- Optimize the EdgeId.asString() method (hugegraph #946)
- Skip the backend storage query when a vertex has no properties (hugegraph #951)
- Throw ExistedException when creating metadata with the same name but different properties (hugegraph #1009)
- Close transactions on demand after querying vertices and edges (hugegraph #1039)
- Clear the cache when a graph is closed (hugegraph #1078)
- Lock when closing a graph to avoid races (hugegraph #1104)
- Optimize vertex/edge deletion efficiency by skipping the query when Label+ID are provided (hugegraph #1150)
- Use IntObjectMap to improve metadata cache efficiency (hugegraph #1185)
- Use a single Raft node to manage the current three stores (hugegraph #1187)
- Release the index-deletion lock early when rebuilding indexes (hugegraph #1193)
- Use LZ4 instead of Gzip when compressing and decompressing async task results (hugegraph #1198)
- Make RocksDB CF deletion exclusive to avoid races (hugegraph #1202)
- Change the CSV reporter's output directory and disable output by default (hugegraph #1233)
Other Changes
- Cherry-pick bug fixes from version 0.10.4 (hugegraph #785, hugegraph #1047)
- Upgrade Jackson to 2.10.2 (hugegraph #859)
- Add thanks to Titan in the Thanks section (hugegraph #906)
- Adapt to TinkerPop tests (hugegraph #1048)
- Change the lowest allowed log level to TRACE (hugegraph #1050)
- Add an IDEA code format configuration file (hugegraph #1060)
- Fix excessive error messages in Travis CI (hugegraph #1098)
Loader
Feature Updates
- Support reading Hadoop configuration files (hugegraph-loader #105)
- Support specifying the timezone of Date properties (hugegraph-loader #107)
- Support importing data from ORC compressed files (hugegraph-loader #113)
- Support controlling vertex checking for single-edge insertion (hugegraph-loader #117)
- Support importing data from Snappy-raw compressed files (hugegraph-loader #119)
- Support mapping file version 2.0 (hugegraph-loader #121)
- Add a command-line tool to convert utf8-bom to utf8 (hugegraph-loader #128)
- Support clearing metadata before an import task begins (hugegraph-loader #140)
- Support storing the id column as a property (hugegraph-loader #143)
- Support configuring a username for import tasks (hugegraph-loader #146)
- Support importing data from Parquet files (hugegraph-loader #153)
- Support specifying the maximum number of lines to read per file (hugegraph-loader #159)
- Support the HTTPS protocol (hugegraph-loader #161)
- Support timestamps as a date format (hugegraph-loader #164)
Bug Fixes
- Fix the row retainAll() method not modifying the names and values arrays (hugegraph-loader #110)
- Fix the NPE when reloading JSON files (hugegraph-loader #112)
Internal Changes
- Print insertion errors only once to avoid excessive error messages (hugegraph-loader #118)
- Split threads for batch insertion and single insertion (hugegraph-loader #120)
- Switch the CSV parser to SimpleFlatMapper (hugegraph-loader #124)
- Encode number and date fields in primary keys (hugegraph-loader #136)
- Ensure primary key columns are valid or mapped (hugegraph-loader #141)
- Skip vertices whose primary key properties are all empty (hugegraph-loader #166)
- Switch to LOADING mode before an import begins and restore the original mode afterwards (hugegraph-loader #169)
- Improve the implementation of stopping import tasks (hugegraph-loader #170)
Tools
Feature Updates
- Support backup for the Memory backend (hugegraph-tools #53)
- Support the HTTPS protocol (hugegraph-tools #58)
- Support configuring a username and password for the migrate subcommand (hugegraph-tools #61)
- Support specifying types and filtering properties when backing up vertices and edges (hugegraph-tools #63)
Bug Fixes
- Fix the NPE in the dump command (hugegraph-tools #49)
Internal Changes
- Clear shard files before backup/dump (hugegraph-tools #53)
- Improve HugeGraph-tools error messages (hugegraph-tools #67)
- Improve the migrate subcommand and remove unsupported sub-configurations (hugegraph-tools #68)
9.3 - HugeGraph 0.12 Release Notes
API & Client
API Updates
- Support connecting to the graph service in https + auth mode (hugegraph-client #109 #110)
- Unify parameter names and defaults of OLTP APIs such as kout/kneighbor (hugegraph-client #122 #123)
- Support full-text property search via P.textcontains() in the RESTful API (hugegraph #1312)
- Add the graph_read_mode API to switch between OLTP and OLAP read modes (hugegraph #1332)
- Support list/set aggregate properties (hugegraph #1332)
- Add the METRICS resource type to the auth API (hugegraph #1355, hugegraph-client #114)
- Add the SCHEMA resource type to the auth API (hugegraph #1362, hugegraph-client #117)
- Add a manual compact API supporting the rocksdb/cassandra/hbase backends (hugegraph #1378)
- Add login/logout APIs to the auth API, supporting issuing and revoking Tokens (hugegraph #1500, hugegraph-client #125)
- Add a project API to the auth API (hugegraph #1504, hugegraph-client #127)
- Add an OLAP write-back API supporting the cassandra/rocksdb backends (hugegraph #1506, hugegraph-client #129)
- Add an API that returns all Schema of a graph (hugegraph #1567, hugegraph-client #134)
- Change the HTTP return code of the property key create and update APIs to 202 (hugegraph #1584)
- Enhance Text.contains() to support 3 formats: "word", "(word)", "(word1|word2|word3)" (hugegraph #1652)
- Unify the behavior of special characters in properties (hugegraph #1670 #1684)
- Support dynamically creating, cloning, and dropping graph instances (hugegraph-client #135)
Other Changes
- Fix IndexLabelV56 id loss when restoring index labels (hugegraph-client #118)
- Add a name() method to the Edge class (hugegraph-client #121)
Core & Server
Feature Updates
- Support dynamically creating graph instances (hugegraph #1065)
- Support invoking OLTP algorithms from Gremlin (hugegraph #1289)
- Support multiple clusters using a shared graph auth service to share permission information (hugegraph #1350)
- Support cache synchronization across multiple nodes (hugegraph #1357)
- Support native collections in OLTP algorithms to reduce GC pressure and improve performance (hugegraph #1409)
- Support taking or restoring snapshots for newly added Raft nodes (hugegraph #1439)
- Support secondary indexes on collection properties (hugegraph #1474)
- Support audit logs, with compression, rate limiting, etc. (hugegraph #1492 #1493)
- Support high-performance, lock-free parallel native collections in OLTP algorithms (hugegraph #1552)
Bug Fixes
- Fix the weighted shortest path NPE (hugegraph #1250)
- Add a whitelist of Raft-related safe operations (hugegraph #1257)
- Fix RocksDB instances not being closed correctly (hugegraph #1264)
- Explicitly trigger a Raft Snapshot after a truncate operation (hugegraph #1275)
- Fix the Raft Leader not updating the cache when receiving forwarded requests from Followers (hugegraph #1279)
- Fix unstable results of the weighted shortest path algorithm (hugegraph #1280)
- Fix the limit parameter of the rays algorithm not taking effect (hugegraph #1284)
- Fix the capacity parameter of the neighborrank algorithm not being checked (hugegraph #1290)
- Fix PostgreSQL initialization failing when no database with the same name as the user exists (hugegraph #1293)
- Fix HBase backend initialization failures when Kerberos is enabled (hugegraph #1294)
- Fix the shard-end check error for the HBase/RocksDB backends (hugegraph #1306)
- Fix the weighted shortest path algorithm not checking that the target vertex exists (hugegraph #1307)
- Fix non-String ids in the personalrank/neighborrank algorithms (hugegraph #1310)
- Check that only the master node may schedule gremlin jobs (hugegraph #1314)
- Fix partially inaccurate results of g.V().hasLabel().limit(n) caused by index coverage (hugegraph #1316)
- Fix the NaN error in the jaccardsimilarity algorithm when the union is empty (hugegraph #1324)
- Fix schema data not syncing across nodes when operated on a Raft Follower (hugegraph #1325)
- Fix TTL not taking effect because the tx was not closed (hugegraph #1330)
- Fix exception handling when a gremlin job's result exceeds the Cassandra limit but is within the task limit (hugegraph #1334)
- Check that the graph exists for auth-delete and role-get API operations (hugegraph #1338)
- Fix abnormal serialization when async task results contain path/tree (hugegraph #1351)
- Fix the NPE when initializing the admin user (hugegraph #1360)
- Fix async task atomicity, ensuring atomic update/get fields and re-schedule (hugegraph #1361)
- Fix the NONE resource type in auth (hugegraph #1362)
- Fix the SecurityException on truncate and the loss of admin information when auth is enabled (hugegraph #1365)
- Fix auth exceptions being ignored when parsing data with auth enabled (hugegraph #1380)
- Fix AuthManager trying to connect to other nodes during initialization (hugegraph #1381)
- Fix base64 decoding errors caused by specific shard info (hugegraph #1383)
- Fix creator being empty when verifying permissions with consistent-hash LB and auth enabled (hugegraph #1385)
- Make the VAR resource no longer depend on the VERTEX resource in auth (hugegraph #1386)
- Make Schema operations depend only on the specific resource when auth is enabled (hugegraph #1387)
- Change some operations from depending on the STATUS resource to the ANY resource when auth is enabled (hugegraph #1391)
- Forbid initializing an empty admin password when auth is enabled (hugegraph #1400)
- Check that username/password are not empty when creating users (hugegraph #1402)
- Fix PrimaryKey or SortKey being settable as a nullable property when updating a Label (hugegraph #1406)
- Fix ScyllaDB losing paged results (hugegraph #1407)
- Fix the weight property being force-cast to double in the weighted shortest path algorithm (hugegraph #1432)
- Unify the degree parameter name across OLTP algorithms (hugegraph #1433)
- Fix the fusiformsimilarity algorithm returning all vertices when similars is empty (hugegraph #1434)
- Improve the paths algorithm to return an empty path when start and target are the same (hugegraph #1435)
- Change the default value of the kout/kneighbor limit parameter from 10 to 10000000 (hugegraph #1436)
- Fix '+' in paging info being URL-encoded as a space (hugegraph #1437)
- Improve the error message of the edge update API (hugegraph #1443)
- Fix the kout degree not applying across all labels (hugegraph #1459)
- Improve kneighbor/kout so the start vertex cannot appear in the result set (hugegraph #1459 #1463)
- Unify the behavior of the Get and Post versions of kout/kneighbor (hugegraph #1470)
- Improve the error message when vertex types do not match during edge creation (hugegraph #1477)
- Fix leftover Range Index entries (hugegraph #1498)
- Fix auth operations not invalidating the cache (hugegraph #1528)
- Change the default value of the sameneighbor limit parameter from 10 to 10000000 (hugegraph #1530)
- Fix the clear API calling create snapshot on all backends when it should not (hugegraph #1532)
- Fix Index Label creation blocking in loading mode (hugegraph #1548)
- Fix adding a graph to or removing a graph from a project (hugegraph #1562)
- Improve some error messages of auth operations (hugegraph #1563)
- Support setting float properties to Infinity/NaN values (hugegraph #1578)
- Fix the quorum read problem when Raft enables safe_read (hugegraph #1618)
- Fix the unit of the token expiry time configuration (hugegraph #1625)
- Fix the MySQL Statement resource leak (hugegraph #1627)
- Fix Schema.getIndexLabel returning no data under race conditions (hugegraph #1629)
- Fix HugeVertex4Insert not being serializable (hugegraph #1630)
- Fix the MySQL count Statement not being closed (hugegraph #1640)
- Fix state desynchronization when deleting an Index Label throws an exception (hugegraph #1642)
- Fix the MySQL statement not being closed on gremlin timeouts (hugegraph #1643)
- Improve Search Index compatibility with the special Unicode characters \u0000 to \u0003 (hugegraph #1659)
- Fix Char not being converted to String, introduced by #1659 (hugegraph #1664)
- Fix abnormal results of has() + within() queries (hugegraph #1680)
- Upgrade Log4j to 2.17 to fix security vulnerabilities (hugegraph #1686 #1698 #1702)
- Fix the NPE when startkey contains an empty string in HBase shard scans (hugegraph #1691)
- Fix performance degradation of the paths algorithm in deep ring traversals (hugegraph #1694)
- Improve the default parameter values and error checking of the personalrank algorithm (hugegraph #1695)
- Fix the P.within condition not taking effect in the RESTful API (hugegraph #1704)
- Fix being unable to dynamically create graphs when auth is enabled (hugegraph #1708)
Configuration changes:
- Share the naming of SSL-related configuration options (hugegraph #1260)
- Support the RocksDB option rocksdb.level_compaction_dynamic_level_bytes (hugegraph #1262)
- Remove the RESTful Server protocol option restserver.protocol; the scheme is now extracted from the URL automatically (hugegraph #1272)
- Add the PostgreSQL option jdbc.postgresql.connect_database (hugegraph #1293)
- Add the vertex.encode_primary_key_number option controlling whether vertex primary keys are encoded (hugegraph #1323)
- Add the query.optimize_aggregate_by_index option controlling index optimization for aggregate queries (hugegraph #1549)
- Change the default value of cache_type from l1 to l2 (hugegraph #1681)
- Add the JDBC forced-reconnect option jdbc.forced_auto_reconnect (hugegraph #1710)
Other Changes
- Add a default SSL Certificate file (hugegraph #1254)
- Share a thread pool across parallel OLTP requests instead of one pool per request (hugegraph #1258)
- Fix Example problems (hugegraph #1308)
- Use jraft 1.3.5 (hugegraph #1313)
- Disable RocksDB's WAL when Raft mode is enabled (hugegraph #1318)
- Use TarLz4Util to improve snapshot compression performance (hugegraph #1336)
- Bump the store version, because property key gained a read frequency field (hugegraph #1341)
- Use the queryVertex/queryEdge methods instead of iterator methods in the vertex/edge Get APIs (hugegraph #1345)
- Support BFS-optimized multi-degree queries (hugegraph #1359)
- Improve the query performance problem caused by RocksDB deleteRange() (hugegraph #1375)
- Fix the travis-ci "cannot find symbol Namifiable" problem (hugegraph #1376)
- Ensure RocksDB snapshots live on the same disk as the data path (hugegraph #1392)
- Fix inaccurate free_memory calculation on MacOS (hugegraph #1396)
- Add a Raft onBusy callback to cooperate with rate limiting (hugegraph #1401)
- Upgrade netty-all from 4.1.13.Final to 4.1.42.Final (hugegraph #1403)
- Support pausing the TaskScheduler in loading mode (hugegraph #1414)
- Fix the raft-tools script (hugegraph #1416)
- Fix license params problems (hugegraph #1420)
- Improve auth log write performance with batch flush & async write (hugegraph #1448)
- Add logging of the MySQL connection URL (hugegraph #1451)
- Improve user info validation performance (hugegraph #1460)
- Fix TTL errors caused by the start time (hugegraph #1478)
- Support hot reloading of the log configuration and compression of audit logs (hugegraph #1492)
- Support per-user rate limiting of audit logs (hugegraph #1493)
- Support user-defined expiry times in the RamCache (hugegraph #1494)
- Cache the login role on the auth client to avoid repeated RPC calls (hugegraph #1507)
- Fix IdSet.contains() not overriding AbstractCollection.contains() (hugegraph #1511)
- Fix the missing rollback when commitPartOfEdgeDeletions() fails (hugegraph #1513)
- Improve Cache metrics performance (hugegraph #1515)
- Log exceptions when license operation errors occur (hugegraph #1522)
- Improve the SimilarsMap implementation (hugegraph #1523)
- Use a tokenless approach to update coverage (hugegraph #1529)
- Improve the project update API code (hugegraph #1537)
- Allow access to GRAPH_STORE from option() (hugegraph #1546)
- Optimize kout/kneighbor count queries to avoid copying collections (hugegraph #1550)
- Optimize shortestpath traversal to start from the side with less data (hugegraph #1569)
- Improve the allowed-keys hint of the rocksdb.data_disks option (hugegraph #1585)
- Optimize the id2code method in OLTP traversals for number ids (hugegraph #1623)
- Optimize HugeElement.getProperties() to return Collection<Property> (hugegraph #1624)
- Add the APACHE PROPOSAL file (hugegraph #1644)
- Improve the close tx flow (hugegraph #1655)
- Catch all exception types for MySQL close during reset() (hugegraph #1661)
- Improve the OLAP property module code (hugegraph #1675)
- Improve the execution performance of the query module (hugegraph #1711)
Loader
- Support importing Parquet files (hugegraph-loader #174)
- Support HDFS Kerberos authentication (hugegraph-loader #176)
- Support connecting to the server over HTTPS to import data (hugegraph-loader #183)
- Fix the trust store file path problem (hugegraph-loader #186)
- Handle exceptions when resetting the loading mode (hugegraph-loader #187)
- Add checks for non-nullable properties when inserting data (hugegraph-loader #190)
- Fix time comparison problems caused by different client/server timezones (hugegraph-loader #192)
- Optimize data parsing performance (hugegraph-loader #194)
- Check that a user-specified file header is not empty (hugegraph-loader #195)
- Fix the MySQL struct.json format in the example program (hugegraph-loader #198)
- Fix inaccurate vertex/edge import speed reporting (hugegraph-loader #200 #205)
- Ensure vertices are imported before edges when check-vertex is enabled (hugegraph-loader #206)
- Fix array overflow when edge Json import formats are inconsistent (hugegraph-loader #211)
- Fix the NPE caused by a missing edge mapping file (hugegraph-loader #213)
- Fix read times possibly being negative (hugegraph-loader #215)
- Improve log output for directory files (hugegraph-loader #223)
- Improve the Loader's Schema handling flow (hugegraph-loader #230)
Tools
- Support the HTTPS protocol (hugegraph-tools #71)
- Remove the –protocol parameter; extract it automatically from the URL (hugegraph-tools #72)
- Support dumping data to HDFS (hugegraph-tools #73)
- Fix the trust store file path problem (hugegraph-tools #75)
- Support backup and restore of auth information (hugegraph-tools #76)
- Support Printer output with no parameters (hugegraph-tools #79)
- Fix the MacOS free_memory calculation (hugegraph-tools #82)
- Support specifying the thread count for backup/restore (hugegraph-tools #83)
- Support commands to dynamically create, clone, and drop graphs (hugegraph-tools #95)
9.4 - HugeGraph 0.10 Release Notes
API & Client
Feature Updates
- Support HugeGraphServer rejecting requests with an error when memory is tight (hugegraph #476)
- Support the API whitelist and HugeGraphServer GC frequency control (hugegraph #522)
- Support the source_in_ring parameter of the Rings API (hugegraph #528, hugegraph-client #48)
- Support the batch update-properties-by-strategy API (hugegraph #493, hugegraph-client #46)
- Support Shard Index prefix and range search indexes (hugegraph #574, hugegraph-client #56)
- Support the UUID ID type for vertices (hugegraph #618, hugegraph-client #59)
- Support unique constraint indexes (Unique Index) (hugegraph #636, hugegraph-client #60)
- Support API request timeouts (hugegraph #674)
- Support querying schema by a list of names (hugegraph #686, hugegraph-client #63)
- Support fetching async tasks with paging (hugegraph #720)
Internal Changes
- Keep traverser parameters consistent with the server (hugegraph-client #44)
- Support paging iteration of vertices or edges within a Shard (hugegraph-client #47)
- Support Gremlin query results holding a GraphManager (hugegraph-client #49)
- Improve RestClient connection parameters (hugegraph-client #52)
- Add tests for Date-type properties (hugegraph-client #55)
- Adapt to the HugeGremlinException exception (hugegraph-client #57)
- Add version compatibility checks for new features (hugegraph-client #66)
- Adapt UUID serialization (hugegraph-client #67)
Core
Feature Updates
- Support the PostgreSQL and CockroachDB storage backends (hugegraph #484)
- Support negative-number indexes (hugegraph #513)
- Support prefix-range queries on edges by Vertex + SortKeys (hugegraph #574)
- Support paged queries of a vertex's adjacent edges (hugegraph #659)
- Forbid sensitive operations through Gremlin (hugegraph #176)
- Support license verification (hugegraph #645)
- Support sorting Search Index results by matching score (hugegraph #653)
- Upgrade tinkerpop to 3.4.3 (hugegraph #648)
Bug Fixes
- Fix the wrong remaining count when querying edges with paging (hugegraph #515)
- Fix the edge cache not being cleared when the backend is cleared (hugegraph #488)
- Fix being unable to insert List-type properties (hugegraph #534)
- Fix the existDatabase(), clearBackend(), and rollback() functions of the PostgreSQL backend (hugegraph #531)
- Fix HugeGraphServer and GremlinServer processes lingering on shutdown (hugegraph #554)
- Fix duplicate lock acquisition in LockTable (hugegraph #566)
- Fix a Vertex obtained from an Edge having no properties (hugegraph #604)
- Fix cross-closing of the RocksDB connection pool (hugegraph #598)
- Fix limit not taking effect when querying super vertices (hugegraph #607)
- Fix Range Index queries with an Equal condition and paging returning only the first page (hugegraph #614)
- Fix the query limit failing after some data is deleted (hugegraph #610)
- Fix the Example1 query error (hugegraph #638)
- Fix partial errors in HBase batch commits (hugegraph #634)
- Fix the NPE in compareNumber() during index searches (hugegraph #629)
- Fix failures when updating properties of already-deleted vertices or edges (hugegraph #679)
- Fix leftover system-type indexes being impossible to clear (hugegraph #675)
- Fix the units in HBase Metrics information (hugegraph #713)
- Fix the storage backend not being initialized (hugegraph #708)
- Fix leftover IN edges when deleting edges by Label (hugegraph #727)
- Fix init-store generating multiple backend_info entries (hugegraph #723)
Internal Changes
- Suppress warnings when the PostgreSQL backend database does not exist (hugegraph #527)
- Remove useless configuration options of the PostgreSQL backend (hugegraph #533)
- Improve HugeType in error messages to readable strings (hugegraph #546)
- Add the jdbc.storage_engine option to specify the storage engine (hugegraph #555)
- Add on-demand reconnection for backend connections (hugegraph #562)
- Avoid printing empty query conditions (hugegraph #583)
- Shorten the string length of Variables (hugegraph #581)
- Add cache options for the RocksDB backend (hugegraph #567)
- Improve exception messages of async tasks (hugegraph #596)
- Split Range Index into four tables: INT, LONG, FLOAT, DOUBLE (hugegraph #574)
- Improve the Metrics names of the vertex and edge APIs (hugegraph #631)
- Add G1GC and GC Log options (hugegraph #616)
- Split the Label Index tables of vertices and edges (hugegraph #635)
- Reduce the property storage space of vertices and edges (hugegraph #650)
- Support encoding numbers in Secondary Index and Primary Key (hugegraph #676)
- Reduce the ID storage space of vertices and edges (hugegraph #661)
- Support binary serialization for the Cassandra backend (hugegraph #680)
- Relax the minimum memory limit (hugegraph #689)
- Fix the Invalid column family problem in RocksDB batch writes (hugegraph #701)
- Delete leftover indexes when updating async task status (hugegraph #719)
- Remove the ScyllaDB Label Index table (hugegraph #717)
- Open multiple RocksDB data directories with multiple threads at startup (hugegraph #721)
- Upgrade RocksDB from v5.17.2 to v6.3.6 (hugegraph #722)
Other
- Add API tests to the codecov statistics (hugegraph #711)
- Improve the default options in configuration files (hugegraph #575)
- Improve the acknowledgements in README (hugegraph #548)
Loader
Feature Updates
- Support the selected field for JSON data sources (hugegraph-loader #62)
- Support customized separators between List elements (hugegraph-loader #66)
- Support value mapping (hugegraph-loader #67)
- Support filtering files by file suffix (hugegraph-loader #82)
- Support recording import progress and resuming from breakpoints (hugegraph-loader #70, hugegraph-loader #87)
- Support reading Header information from different relational databases (hugegraph-loader #79)
- Support Unsigned Long property values (hugegraph-loader #91)
- Support the UUID ID type for vertices (hugegraph-loader #98)
- Support batch property updates by strategy (hugegraph-loader #97)
Bug Fixes
- Fix nullable keys not working in mapping fields (hugegraph-loader #64)
- Fix Parse Exception not being caught (hugegraph-loader #74)
- Fix the wrong semaphore count when waiting for async tasks to finish (hugegraph-loader #86)
- Fix hasNext() returning true on empty tables (hugegraph-loader #90)
- Fix boolean parsing errors (hugegraph-loader #92)
Internal Changes
- Add HTTP connection parameters (hugegraph-loader #81)
- Improve the import-finished summary (hugegraph-loader #80)
- Improve the handling of rows with missing or extra columns (hugegraph-loader #93)
Tools
Feature Updates
- Support restoring data backed up by a 0.8 server into a 0.9 server (hugegraph-tools #34)
- Add the global timeout parameter (hugegraph-tools #44)
- Add the migrate subcommand to migrate graphs (hugegraph-tools #45)
Bug Fixes
- Fix the dump command not supporting the split size parameter (hugegraph-tools #32)
Internal Changes
- Remove Hadoop's dependency on Jersey 1.19 (hugegraph-tools #31)
- Optimize the ordering of subcommands in the help message (hugegraph-tools #37)
- Use log4j2 to remove log4j warnings (hugegraph-tools #39)
9.5 - HugeGraph 0.9 Release Notes
API & Client
Feature Updates
- Add the personal rank API and neighbor rank API (hugegraph #274)
- Add the skip_degree parameter to the Shortest path API to skip super vertices (hugegraph #433, hugegraph-client #42)
- Support paging in the vertex/edge scan API (hugegraph #428, hugegraph-client #35)
- Use a simplified property serializer in VertexAPI (hugegraph #332, hugegraph-client #37)
- Add the customized paths API and customized crosspoints API (hugegraph #306, hugegraph-client #40)
- Return a 503 error when all server threads are busy (hugegraph #343)
- Keep the depth and degree parameters of the APIs consistent (hugegraph #252, hugegraph-client #30)
Bug Fixes
- Validate Date rather than Timestamp values when adding properties (hugegraph-client #26)
Internal Changes
- Support connection reuse in RestClient (hugegraph-client #33)
- Replace the redundant ObjectMapper with JsonUtil (hugegraph-client #41)
- Have Edge reference Vertex directly to make batch insertion friendlier (hugegraph-client #29)
- Replace Cobertura with JaCoCo for code coverage (hugegraph-client #39)
- Improve the Shard deserialization mechanism (hugegraph-client #34)
Core
Feature Updates
- Support Cassandra's NetworkTopologyStrategy (hugegraph #448)
- Use paging for metadata deletion and index rebuilding (hugegraph #417)
- Support running HugeGraphServer as a system service (hugegraph #170)
- Support paging for single-index queries (hugegraph #328)
- Support custom plugins when initializing graphs (hugegraph #364)
- Add the hbase.zookeeper.znode.parent option for the HBase backend (hugegraph #333)
- Support progress updates for async Gremlin tasks (hugegraph #325)
- Use async tasks to delete leftover indexes (hugegraph #285)
- Support range lookups by sortKeys (hugegraph #271)
Bug Fixes
- Fix the Cassandra batch exceeding the 65535 limit when deleting secondary indexes (hugegraph #386)
- Fix incorrect RocksDB disk utilization metrics (hugegraph #326)
- Fix async index deletion errors (hugegraph #336)
- Fix the race condition in BackendSessionPool.close() (hugegraph #330)
- Fix reserved system IDs not working (hugegraph #315)
- Fix the loss of cache metrics information (hugegraph #321)
- Fix hasId() not supporting numeric ids when querying vertices by id (hugegraph #302)
- Fix the 80w limit when rebuilding indexes and the Cassandra batch 65535 problem (hugegraph #292)
- Fix leftover index deletion being unable to handle non-flattened queries (hugegraph #281)
Internal Changes
- Uniformly name iterator variables 'iter' (hugegraph #438)
- Add the PageState.page() method as a unified interface for paging information (hugegraph #429)
- Restructure the mapdb-based in-memory backend and add test cases (hugegraph #357)
- Support code coverage statistics (hugegraph #376)
- Set the lower bound of tx capacity to COMMIT_BATCH (default 500) (hugegraph #379)
- Add a shutdown hook to close thread pools automatically (hugegraph #355)
- Exclude environment initialization time from PerfExample statistics (hugegraph #329)
- Improve schema serialization in BinarySerializer (hugegraph #316)
- Avoid creating redundant indexes for primary key properties (hugegraph #317)
- Limit Gremlin async task names to under 256 bytes (hugegraph #313)
- Use multi-get to optimize query-by-id for the HBase backend (hugegraph #279)
- Support more date data types (hugegraph #274)
- Change the port range of Cassandra and HBase to (1, 65535) (hugegraph #263)
Other
- Add travis API tests (hugegraph #299)
- Remove GremlinServer-related default options from rest-server.properties (hugegraph #290)
Loader
Feature Updates
- Support importing data from HDFS and relational databases (hugegraph-loader #14)
- Support passing an auth token parameter (hugegraph-loader #46)
- Support skipping lines via regex (hugegraph-loader #43)
- Support List/Set properties when importing TEXT files (hugegraph-loader #38)
- Support custom date formats (hugegraph-loader #28)
- Support importing data from a specified directory (hugegraph-loader #33)
- Support ignoring trailing extra columns or columns with null values (hugegraph-loader #23)
Bug Fixes
- Fix Example problems (hugegraph-loader #57)
- Fix edge parsing when the vertex uses the customized ID strategy (hugegraph-loader #24)
Internal Changes
- Improve the URL regex (hugegraph-loader #47)
Tools
Feature Updates
- Support backing up and restoring massive data to local storage and HDFS, with compression (hugegraph-tools #21)
- Support canceling and cleaning up async tasks (hugegraph-tools #20)
- Improve the graph-clear command's prompt (hugegraph-tools #23)
Bug Fixes
- Fix the restore command always using 'hugegraph' as the target graph; support specifying the graph (hugegraph-tools #26)
9.6 - HugeGraph 0.8 Release Notes
API & Client
Feature Updates
- Add rays and rings RESTful APIs to the server (hugegraph #45)
- Make IndexLabel creation return an async task (hugegraph #95, hugegraph-client #9)
- Add restore-mode-related APIs to the client (hugegraph-client #10)
- Make the task-list API not return task_input and task_result (hugegraph #143)
- Add an API to cancel async tasks (hugegraph #167, hugegraph-client #15)
- Add an API to get backend metrics (hugegraph #155)
Bug Fixes
- The last page in paging should be null rather than "null" (hugegraph #168)
- Paging iteration should stop when the server has no next page (hugegraph-client #16)
- Fix a type conversion error when adding vertices with custom Number Ids (hugegraph-client #21)
Internal Changes
- Add continuous integration tests (hugegraph-client #19)
Core
Feature Updates
- Remove the 80w limit when canceling async tasks queried by label (hugegraph #93)
- Allow passing property values as a Json List when cardinality is set (hugegraph #109)
- Support restoring graphs in restore mode and merge mode (hugegraph #114)
- Support multiple graphs sharing the same RocksDB storage directory (hugegraph #123)
- Support user-defined authenticators (hugegraph-loader #133)
- Resume incomplete tasks after the server restarts (hugegraph #188)
- Check whether a vertex with the same Id exists when the vertex Id strategy is customized (hugegraph #189)
Bug Fixes
- Add a check that HasContainer's predicate is not null (hugegraph #16)
- Fix init-store failure on the RocksDB backend caused by wrong data and log directories (hugegraph #25)
- Fix the misleading startup timeout (while actually accessible) when the logs directory does not exist (hugegraph #38)
- Fix the ScyllaDB backend missing the vertex table registration (hugegraph #47)
- Fix hasLabel queries failing with multiple labels (hugegraph #50)
- Fix the Memory backend not initializing task-related schema (hugegraph #100)
- Fix hasLabel queries erroring when elements exceed 80w, even with limit (hugegraph #104)
- Fix task status not being saved after running (hugegraph #113)
- Fix force-casting HugeGraphAuthProxy to HugeGraph when checking backend version information (hugegraph #127)
- Fix the batch.max_vertices_per_batch option not taking effect (hugegraph #130)
- Fix HugeGraphServer starting without errors but being inaccessible when rest-server.properties is wrong (hugegraph #131)
- Fix MySQL backend commits in one thread not being visible to other threads (hugegraph #163)
- Fix 'String cannot be cast to Date' in union(branch) + has(date) queries (hugegraph #181)
- Fix incomplete results when querying vertices with limit on the RocksDB backend (hugegraph #197)
- Fix errors about other threads being unable to operate the tx (hugegraph #204)
Internal Changes
- Split the graph.cache_xx options into vertex.cache_xx and edge.cache_xx (hugegraph #56)
- Remove hugegraph-dist's dependency on hugegraph-api (hugegraph #61)
- Optimize set intersection and difference operations (hugegraph #85)
- Optimize transaction caching and index/Id queries (hugegraph #105)
- Name the threads of each thread pool (hugegraph #124)
- Add and optimize metrics statistics (hugegraph #138)
- Add metrics for incomplete tasks (hugegraph #141)
- Commit index updates in batches instead of all at once (hugegraph #150)
- Hold the schema read lock while adding vertices/edges until commit/rollback completes (hugegraph #180)
- Speed up Tinkerpop tests (hugegraph #19)
- Fix the Tinkerpop test bug of missing filter files in the resource directory (hugegraph #26)
- Enable the supportCustomIds feature in Tinkerpop tests (hugegraph #69)
- Add HBase backend tests to continuous integration (hugegraph #41)
- Avoid running the CI deploy script multiple times (hugegraph #170)
- Fix failing cache unit tests (hugegraph #177)
- Switch some backends' storage to tmpfs in CI to speed up tests (hugegraph #206)
Loader
Feature Updates
- Support ignoring specific columns of the source file (hugegraph-loader #2)
- Support importing properties with Set cardinality (hugegraph-loader #10)
- Use multiple threads for single insertion too, fixing slow trailing single-record imports when errors are frequent (hugegraph-loader #12)
Bug Fixes
- Fix possible statistics errors during import (hugegraph-loader #4)
- Fix import errors with custom Number Ids for vertices (hugegraph-loader #6)
- Fix import errors with composite primary keys for vertices (hugegraph-loader #18)
Internal Changes
- Add continuous integration tests (hugegraph-loader #8)
- Improve the message shown when a file is not found (hugegraph-loader #16)
Tools
Feature Updates
- Add KgDumper (hugegraph-tools #6)
- Support restoring graphs in restore mode and merge mode (hugegraph-tools #9)
Bug Fixes
- Fix the get_ip utility function erroring when ifconfig is not installed (hugegraph-tools #13)
9.7 - HugeGraph 0.7 Release Notes
API & Java Client
Feature Updates
- Support async metadata deletion and index rebuilding (HugeGraph-889)
- Add monitoring APIs integrated with Gremlin's monitoring framework (HugeGraph-1273)
Bug Fixes
- Fix EdgeAPI setting property values to property keys when updating properties (HugeGraph-81)
- Return a 400 instead of 404 for illegal ids when deleting vertices or edges (HugeGraph-1337)
Core
Feature Updates
- Support the HBase storage backend (HugeGraph-1280)
- Add an async API framework; time-consuming operations can be performed via async APIs (HugeGraph-387)
- Support secondary indexes on long property columns, removing the 256-byte index column length limit (HugeGraph-1314)
- Support "create or update" operations on vertex properties (HugeGraph-1303)
- Support full-text search (HugeGraph-1322)
- Support database table version checks (HugeGraph-1328)
- Fix the "Batch too large" / "Batch 65535 statements" errors when deleting super vertices (HugeGraph-1354)
- Support async metadata deletion and index rebuilding (HugeGraph-889)
- Support long-running async Gremlin tasks (HugeGraph-889)
Bug Fixes
- Prevent super-vertex access from blocking the service by querying too many next-level vertices (HugeGraph-1302)
- Fix "connection has been closed" errors during HBase initialization (HugeGraph-1318)
- Fix "String cannot be cast to Date" when filtering vertices by a date property (HugeGraph-1319)
- Fix the wrong range-index judgment in leftover index deletion (HugeGraph-1291)
- Fix leftover index cleanup not considering composite indexes (HugeGraph-1311)
- Fix errors when deleting edges by otherV conditions because the edge's vertex does not exist (HugeGraph-1347)
- Fix wrong offset and limit results for label indexes (HugeGraph-1329)
- Fix data not being deletable when deleting a vertex label or edge label with label index disabled (HugeGraph-1355)
Internal Changes
- Fix HugeGraphServer startup failures caused by the hbase backend pulling in a newer Jackson-databind (HugeGraph-1306)
- Have Core and Client each hold their own shard class instead of depending on the common module (HugeGraph-1316)
- Remove the 80w capacity limit on rebuild index and vertex/edge label deletion (HugeGraph-1297)
- Consider synchronization for all schema operations (HugeGraph-1279)
- Split Cassandra's index table, one element id per row, to avoid very slow or stuck imports under high aggregation (HugeGraph-1304)
- Move common-related test cases from hugegraph-test to hugegraph-common (HugeGraph-1297)
- Support saving task parameters for async tasks to enable task recovery (HugeGraph-1344)
- Support deploying documentation to GitHub via script (HugeGraph-1351)
- Implement index deletion for the RocksDB and Hbase backends (HugeGraph-1317)
Loader
Feature Updates
- Support user-created schema in HugeLoader, passed in as a file (HugeGraph-1295)
Bug Fixes
- Fix HugeLoader not distinguishing the input file encoding, which could garble imported data (HugeGraph-1288)
- Fix the three subdirectories of HugeLoader's packaged example directory being empty (HugeGraph-1288)
- Fix parse errors when CSV data columns themselves contain commas (HugeGraph-1320)
- Prevent a single failure from failing the whole batch in batch insertion (HugeGraph-1336)
- Fix exception messages being printed as templates (HugeGraph-1345)
- Fix the program exiting when edge data has the wrong column count (HugeGraph-1346)
- Fix HugeLoader's automatic schema creation (HugeGraph-1363)
- Check ID length in bytes rather than string length (HugeGraph-1374)
Internal Changes
- Add test cases (HugeGraph-1361)
Tools
Feature Updates
- Speed up backup/restore with multiple threads and add a retry mechanism (HugeGraph-1307)
- Support passing a path for storing packages in one-click deployment (HugeGraph-1325)
- Implement graph dump (building vertices and their associated edges in memory) (HugeGraph-1339)
- Add backup-scheduler for scheduled backups that keep a number of recent backups (HugeGraph-1326)
- Add async task queries and async Gremlin execution (HugeGraph-1357)
Bug Fixes
- Use UTF-8 encoding for hugegraph-tools backup and restore (HugeGraph-1321)
- Set a default JVM heap size and a release version number for hugegraph-tools (HugeGraph-1340)
Studio
Bug Fixes
- Fix groovy parse errors in g.V() when vertex ids contain newlines in HugeStudio (HugeGraph-1292)
- Limit the number of returned vertices and edges (HugeGraph-1333)
- Fix notes disappearing or hanging while loading (HugeGraph-1353)
- Fix HugeStudio packaging failing silently without errors, producing an unstartable release package (HugeGraph-1368)
9.8 - HugeGraph 0.6 Release Notes
API & Java Client
Feature Updates
- Add the RESTful APIs paths and crosspoints to find multiple paths between source and target vertices, or paths containing crosspoints (HugeGraph-1210)
- Add batch-insert concurrency control at the API layer to avoid all threads being used for writes with none left for queries (HugeGraph-1228)
- Add a scan-API allowing clients to fetch vertices and edges concurrently (HugeGraph-1197)
- Support Client access to auth-enabled HugeGraph with a username and password (HugeGraph-1256)
- Add an offset parameter to the vertex/edge list APIs (HugeGraph-1261)
- Disallow passing both page and [label, properties] to the RESTful vertex/edge list APIs (HugeGraph-1262)
- Add degree, capacity, and limit to the k-out, K-neighbor, paths, shortestpath and other APIs (HugeGraph-1176)
- Add set/get/clear APIs for restore status (HugeGraph-1272)
Bug Fixes
- Make RestClient's basic auth use Preemptive mode (HugeGraph-1257)
- Fix iterators obtained repeatedly from a ResultSet in HugeGraph-Client failing after the first (HugeGraph-1278)
Core
Feature Updates
- Implement the scan feature for RocksDB (HugeGraph-1198)
- Support deleting keys from Schema userdata (HugeGraph-1195)
- Support range queries on date-type properties (HugeGraph-1208)
- Push limit down to the backend to avoid redundant index reads where possible (HugeGraph-1234)
- Add API permissions and access control (HugeGraph-1162)
- Forbid multiple backends from configuring the same store value (HugeGraph-1269)
Bug Fixes
- Fix RocksDB Range queries with only an upper or lower bound returning records of other IndexLabels (HugeGraph-1211)
- Fix RocksDB limit queries returning one extra result from graphTransaction (HugeGraph-1234)
- Fix init-store sometimes hanging on CentOS with the generic io.netty; switch to netty-transport-native-epoll (HugeGraph-1255)
- Fix Cassandra's in-statement (query by id) 65535 element limit (HugeGraph-1239)
- Fix errors when a primary key plus an index (or ordinary property) is used as a query condition (HugeGraph-1276)
- Fix init-store.sh failing or hanging during initialization on Centos (HugeGraph-1255)
Tests
None
Internal Changes
- Move the compareNumber method to the common module (HugeGraph-1208)
- Fix HugeGraphServer failing to start on Ubuntu machines (HugeGraph-1154)
- Fix init-store.sh not being executable from the bin directory (HugeGraph-1223)
- Fix HugeGraphServer not being stoppable with CTRL+C during startup (HugeGraph-1223)
- Check whether the port is occupied before HugeGraphServer starts (HugeGraph-1223)
- Check that the system JDK is installed and is version 1.8 before HugeGraphServer starts (HugeGraph-1223)
- Add a getMap() method to the HugeConfig class (HugeGraph-1236)
- Change the default options: use RocksDB as the backend and comment the important options (HugeGraph-1240)
- Rename userData to userdata (HugeGraph-1249)
- The HugeGraphServer process cannot be found with jps on centos 4.3
- Add the ALLOW_TRACE option to control whether exception stack traces are returned (HugeGraph-81)
Tools
Feature Updates
- Add an automated deployment tool to install all components (HugeGraph-1267)
- Add a clear script and split deploy and start-all (HugeGraph-1274)
- Monitor the hugegraph service to improve availability (HugeGraph-1266)
- Add backup/restore functionality and commands (HugeGraph-1272)
- Add commands corresponding to the graphs API (HugeGraph-1272)
Bug Fixes
Loader
Feature Updates
- Add csv and json examples by default (HugeGraph-1259)
Bug Fixes
9.9 - HugeGraph 0.5 Release Notes
API & Java Client
Feature Updates
- Add the bool parameter enable_label_index to VertexLabel and EdgeLabel to indicate whether to build a label index (HugeGraph-1085)
- Add RESTful APIs for efficient shortest path, K-out, and K-neighbor queries (HugeGraph-944)
- Add a RESTful API to batch-query vertices by an id list (HugeGraph-1153)
- Support iterating over all vertices and edges using paging (HugeGraph-1166)
- Fix vertices whose ids contain URL-reserved characters such as / and % not being retrievable via VertexAPI (HugeGraph-1127)
- Rename the RESTful parameter controlling vertex checks on batch edge insertion from checkVertex to check_vertex (HugeGraph-81)
Bug Fixes
- Fix hasId() failing to match LongId correctly (HugeGraph-1083)
Core
Feature Updates
- Support common RocksDB options (HugeGraph-1068)
- Support rate limiting for insert, delete, update and other operations (HugeGraph-1071)
- Support importing sst files into RocksDB (HugeGraph-1077)
- Add the MySQL storage backend (HugeGraph-1091)
- Add the Palo storage backend (HugeGraph-1092)
- Add a switch controlling whether to build vertex/edge label indexes (HugeGraph-1085)
- Support paged data retrieval via the API (HugeGraph-1105)
- Auto-create the RocksDB data directory if it does not exist (HugeGraph-1135)
- Add the advanced traversal functions shortest path, K-neighbor, K-out, and batch query by id list (HugeGraph-944)
- Add a timeout retry mechanism to init-store.sh (HugeGraph-1150)
- Split the edge table into two tables: OUT and IN (HugeGraph-1002)
- Limit the maximum vertex ID length to 128 bytes (HugeGraph-1168)
- Optimize Cassandra via data compression (configurable snappy, lz4) (HugeGraph-428)
- Support IN and OR operations (HugeGraph-137)
- Support RocksDB writing to multiple disks in parallel (HugeGraph-1177)
- Optimize MySQL performance via batch insertion (HugeGraph-1188)
Bug Fixes
- Fix Kryo serialization exceptions under multithreading (HugeGraph-1066)
- Fix elem-id being written twice in RocksDB index content (HugeGraph-1094)
- Fix SnowflakeIdGenerator.instance possibly being initialized multiple times under multithreading (HugeGraph-1095)
- Fix unclear exception messages when querying an edge's vertex that does not exist (HugeGraph-1101)
- Fix init-store failing when RocksDB is configured with multiple graphs (HugeGraph-1151)
- Fix Date-type property values not being supported (HugeGraph-1165)
- Fix internal system indexes being created but not searchable (HugeGraph-1167)
- Fix records in the edge-in table not being deleted when deleting edges by label after the table split (HugeGraph-1182)
Tests
- Add the vertex.force_id_string option, enabled when running tinkerpop tests (HugeGraph-1069)
Internal Changes
- Add an allowValues() function for enumerations to OptionChecker in the common library (HugeGraph-1075)
- Clean up useless and outdated dependencies to reduce the package size (HugeGraph-1078)
- Fix HugeConfig not checking the values of repeatedly configured options when constructed from a file path (HugeGraph-1079)
- Support smart allocation of maximum memory at Server startup (HugeGraph-1154)
- Fix the server failing to start on Mac OS because the free command is unsupported (HugeGraph-1154)
- Change option registration to string-based to avoid a direct dependency on the Backend package (HugeGraph-1171)
- Add the StoreDumper tool to inspect backend data (HugeGraph-1172)
- Parameterize all internal build machine information in Jenkins (HugeGraph-1179)
- Move RestClient to the common module so both server and client depend on common (HugeGraph-1183)
- Add the option dump tool ConfDumper (HugeGraph-1193)
9.10 - HugeGraph 0.4.4 Release Notes
API & Java Client
功能更新
- HugeGraph-Server支持WebSocket,能用Gremlin-Console连接使用;并支持直接编写groovy脚本调用Core的代码(HugeGraph-977)
- 适配Schema-id(HugeGraph-1038)
BUG修复
- hugegraph-0.3.3:删除vertex的属性,body中properties=null,返回500,空指针(HugeGraph-950)
- hugegraph-0.3.3: graph.schema().getVertexLabel() 空指针(HugeGraph-955)
- HugeGraph-Client 中顶点和边的属性集合不是线程安全的(HugeGraph-1013)
- 批量操作的异常信息无法打印(HugeGraph-1013)
- 异常message提示可读性太差,都是用propertyKey的id显示,对于用户来说无法立即识别(HugeGraph-1055)
- 批量新增vertex实体,有一个body体为null,返回500,空指针(HugeGraph-1056)
- 追加属性body体中只包含properties,功能出现回退,抛出异常The label of vertex can’t be null(HugeGraph-1057)
- HugeGraph-Client适配:PropertyKey的DateType中Timestamp替换成Date(HugeGraph-1059)
- 创建IndexLabel时baseValue为空会报出500错误(HugeGraph-1061)
Core
功能更新
- 实现上层独立事务管理,并兼容tinkerpop事务规范(HugeGraph-918、HugeGraph-941)
- 完善memory backend,可以通过API正确访问,且适配了tinkerpop事务(HugeGraph-41)
- 增加RocksDB后端存储驱动框架(HugeGraph-929)
- RocksDB数字索引range-query实现(HugeGraph-963)
- 为所有的schema增加了id,并将各表原依赖name的列也换成id(HugeGraph-589)
- 填充query key-value条件时,value的类型如果不匹配key定义的类型时需要转换为该类型(HugeGraph-964)
- 统一各后端的offset、limit实现(HugeGraph-995)
- 查询顶点、边时,Core支持迭代方式返回结果,而非一次性载入内存(HugeGraph-203)
- memory backend支持range query(HugeGraph-967)
- memory backend的secondary的支持方式从遍历改为IdQuery(HugeGraph-996)
- 联合索引支持复杂的(只要逻辑上可以查都支持)多种索引组合查询(HugeGraph-903)
- Schema中增加存储用户数据的域(map)(HugeGraph-902)
- 统一ID的解析及序列化(包括API及Backend)(HugeGraph-965)
- RocksDB没有keyspace概念,需要完善对多图实例的支持(HugeGraph-973)
- 支持Cassandra设置连接用户名密码(HugeGraph-999)
- Schema缓存支持缓存所有元数据(get-all-schema)(HugeGraph-1037)
- 目前依然保持schema对外暴露name,暂不直接使用schema id(HugeGraph-1032)
- 用户传入ID的策略修改为支持String和Number(HugeGraph-956)
BUG修复
- 删除旧的前缀indexLabel时数据库中的schemaLabel对象还有残留(HugeGraph-969)
- HugeConfig解析时共用了公共的Option,导致不同graph的配置项有覆盖(HugeGraph-984)
- 数据库数据不兼容时,提示更加友好的异常信息(HugeGraph-998)
- 支持Cassandra设置连接用户名密码(HugeGraph-999)
- RocksDB deleteRange end溢出后触发RocksDB assert错误(HugeGraph-971)
- 允许根据null值id进行查询顶点/边,返回结果为空集合(HugeGraph-1045)
- 内存中存在部分更新数据未提交时,搜索结果不对(HugeGraph-1046)
- g.V().hasLabel(XX)传入不存在的label时报错: Internal Server Error and Undefined property key: ‘~label’(HugeGraph-1048)
- gremlin获取的的schema只剩下名称字符串(HugeGraph-1049)
- 大量数据情况下无法进行count操作(HugeGraph-1051)
- RocksDB持续插入6~8千万条边时卡住(HugeGraph-1053)
- 整理属性类型的支持,并在BinarySerializer中使用二进制格式序列化属性值(HugeGraph-1062)
测试
- 增加tinkerpop的performance测试(HugeGraph-987)
内部修改
- HugeFactory打开同一个图(name相同者)时,共用HugeGraph对象即可(HugeGraph-983)
- 规范索引类型命名secondary、range、search(HugeGraph-991)
- 数据库数据不兼容时,提示更加友好的异常信息(HugeGraph-998)
- IO部分的 gryo 和 graphson 的module分开(HugeGraph-1041)
- 增加query性能测试到PerfExample中(HugeGraph-1044)
- 关闭gremlin-server的metric日志(HugeGraph-1050)
9.11 - HugeGraph 0.3.3 Release Notes
API & Java Client
功能更新
- 为vertex-label和edge-label增加可空属性集合,允许在create和append时指定(HugeGraph-245)
- 配合core的功能为用户提供tinkerpop variables RESTful API(HugeGraph-396)
- 支持顶点/边属性的更新和删除(HugeGraph-894)
- 支持顶点/边的条件查询(HugeGraph-919)
BUG修复
- HugeGraph-API接收的RequestBody为null或""时抛出空指针异常(HugeGraph-795)
- 为HugeGraph-API添加输入参数检查,避免抛出空指针异常(HugeGraph-796 ~ HugeGraph-798,HugeGraph-802,HugeGraph-808 ~ HugeGraph-814,HugeGraph-817,HugeGraph-823,HugeGraph-860)
- 创建缺失outV-label 或者 inV-label的实体边,依然能够被创建成功,不符合需求(HugeGraph-835)
- 创建vertex-label和edge-label时可以任意传入index-names(HugeGraph-837)
- 创建index,base-type=“VERTEX”等值(期望VL、EL),返回500(HugeGraph-846)
- 创建index,base-type和base-value不匹配,提示不友好(HugeGraph-848)
- 删除已经不存在的两个实体之间的关系,schema返回204,顶点和边类型的则返回404(期望统一为404)(HugeGraph-853,HugeGraph-854)
- 给vertex-label追加属性,缺失id-strategy,返回信息有误(HugeGraph-861)
- 给edge-label追加属性,name缺失,提示信息有误(HugeGraph-862)
- 给edge-label追加属性,source-label为“null”,提示信息有误(HugeGraph-863)
- 查询时的StringId如果为空字符串应该抛出异常(HugeGraph-868)
- 通过Rest API创建两个顶点之间的边后,在studio中通过g.V()查询时刚新创建的边不显示,g.E()则能够显示新创建的边(HugeGraph-869)
- HugeGraph-Server的内部错误500,不应该将stack trace返回给Client(HugeGraph-879)
- addEdge传入空的id字符串时会抛出非法参数异常(HugeGraph-885)
- HugeGraph-Client 的 Gremlin 查询结果在解析 Path 时,如果不包含Vertex/Edge会反序列化异常(HugeGraph-891)
- 枚举HugeKeys的字符串变成小写字母加下划线,导致API序列化时字段名与类中变量名不一致,进而序列化失败(HugeGraph-896)
- 增加边到不存在的顶点时返回404(期望400)(HugeGraph-922)
Core
功能更新
- 支持对顶点/边属性(包括索引列)的更新操作(HugeGraph-369)
- 索引field为空或者空字符串的支持(hugegraph-553和hugegraph-288)
- vertex/edge的属性一致性保证推迟到实际要访问属性时(hugegraph-763)
- 增加ScyllaDB后端驱动(HugeGraph-772)
- 支持tinkerpop的hasKey、hasValue查询(HugeGraph-826)
- 支持tinkerpop的variables功能(HugeGraph-396)
- 以“~”为开头的为系统隐藏属性,用户不可以创建(HugeGraph-842)
- 增加Backend Features以兼容不同后端的特性(HugeGraph-844)
- 对mutation的update可能出现的操作不直接抛错,进行细化处理(HugeGraph-887)
- 对append到vertex-label/edge-label的property检查,必须是nullable的(HugeGraph-890)
- 对于按照id查询,当有的id不存在时,返回其余存在的对象,而非直接抛异常(HugeGraph-900)
BUG修复
- Vertex.edges(Direction.BOTH,…) assert error(HugeGraph-661)
- 无法支持在addVertex函数中对同一property(single)多次赋值(HugeGraph-662)
- 更新属性时不涉及更新的索引列会丢失(HugeGraph-801)
- GraphTransaction中的ConditionQuery需要索引查询时,没有触发commit,导致查询失败(HugeGraph-805)
- Cassandra不支持query offset,查询时limit=offset+limit取回所有记录后过滤(HugeGraph-851)
- 多个插入操作加上一个删除操作,插入操作会覆盖删除操作(HugeGraph-857)
- 查询时的StringId如果为空字符串应该抛出异常(HugeGraph-868)
- 元数据schema方法只返回 hidden 信息(HugeGraph-912)
测试
- tinkerpop的structure和process测试使用不同的keyspace(HugeGraph-763)
- 将tinkerpop测试和unit测试添加到流水线release-after-merge中(HugeGraph-763)
- jenkins脚本分离各阶段子脚本,修改项目中的子脚本即可生效构建(HugeGraph-800)
- 增加clear backends功能,在tinkerpop suite运行完成后清除后端(HugeGraph-852)
- 增加BackendMutation的测试(HugeGraph-801)
- 多线程操作图时可能抛出NoHostAvailableException异常(HugeGraph-883)
内部修改
- 调整HugeGraphServer和HugeGremlinServer启动时JVM的堆内存初始为256M,最大为2048M(HugeGraph-218)
- 创建Cassandra Table时,使用schemaBuilder代替字符串拼接(hugegraph-773)
- 运行测试用例时如果初始化图失败(比如数据库连接不上),clear()报错(HugeGraph-910)
- Example抛异常 Need to specify a readable config file rather than…(HugeGraph-921)
- HugeGraphServer和HugeGreminServer的缓存保持同步(HugeGraph-569)
9.12 - HugeGraph 0.2 Release Notes
API & Java Client
功能更新
0.2版实现了图数据库基本功能,提供如下功能:
元数据(Schema)
顶点类型(Vertex Label)
- 创建顶点类型
- 删除顶点类型
- 查询顶点类型
- 增加顶点类型的属性
边类型(Edge Label)
- 创建边类型
- 删除边类型
- 查询边类型
- 增加边类型的属性
属性(Property Key)
- 创建属性
- 删除属性
- 查询属性
索引(Index Label)
- 创建索引
- 删除索引
- 查询索引
元数据检查
- 元数据依赖的其它元数据检查(如Vertex Label依赖Property Key)
- 数据依赖的元数据检查(如Vertex依赖Vertex Label)
图数据
顶点(Vertex)
- 增加顶点
- 删除顶点
- 增加顶点属性
- 删除顶点属性(必须为非索引列)
- 批量插入顶点
- 查询
- 批量查询
顶点ID策略
- 用户指定ID(字符串)
- 用户指定某些属性组合作为ID(拼接为可见字符串)
- 自动生成ID
边(Edge)
- 增加边
- 增加多条同类型边到指定的两个节点(SortKey)
- 删除边
- 增加边属性
- 删除边属性(必须为非索引列)
- 批量插入边
- 查询
- 批量查询
顶点/边属性
属性类型支持
- text
- boolean
- byte、blob
- int、long
- float、double
- timestamp
- uuid
支持单值属性
支持多值属性:List、Set(注意:非嵌套属性)
事务
- 原子性级别保证(依赖后端)
- 自动提交事务
- 手动提交事务
- 并行事务
索引
索引类型
- 二级索引
- 范围索引(数字类型)
索引操作
- 为指定类型的顶点/边创建单列索引(不支持List或Set列创建索引)
- 为指定类型的顶点/边创建复合索引(不支持List或Set列创建索引,复合索引为前缀索引)
- 删除指定类型顶点/边的索引(部分或全部索引均可)
- 重建指定类型顶点/边的索引(部分或全部索引均可)
查询/遍历
列出所有元数据、图数据(支持Limit,不支持分页)
根据ID查询元数据、图数据
根据指定属性的值查询图数据
根据指定属性的值范围查询图数据(属性必须为数字类型)
根据指定顶点/边类型、指定属性的值查询顶点/边
根据指定顶点/边类型、指定属性的值范围查询顶点(属性必须为数字类型)
根据顶点类型(Vertex Label)查询顶点
根据边类型(Edge Label)查询边
根据顶点查询边
- 查询顶点的所有边
- 查询顶点的指定方向边(出边、入边)
- 查询顶点的指定方向、指定类型边
- 查询两个顶点的同类型边中的某条边(SortKey)
标准Gremlin遍历
缓存
可缓存内容
- 元数据缓存
- 顶点缓存
缓存特性
- LRU策略
- 高性能并发访问
- 支持超时过期机制
接口(RESTful API)
- 版本号接口
- 图实例接口
- 元数据接口
- 图数据接口
- Gremlin接口
更多细节详见API文档
后端支持
支持Cassandra后端
- 持久化
- CQL3
- 集群
支持Memory后端(仅用于测试)
- 非持久化
- 部分特性无法支持(如:更新边属性、根据边类型查询边)
其它
支持配置项
- 后端存储类型
- 序列化方式
- 缓存参数
支持多图实例
- 静态方式(增加多个图配置文件)
版本检查
- 内部依赖包匹配版本检查
- API匹配版本检查
9.13 - HugeGraph 0.2.4 Release Notes
API & Java Client
功能更新
元数据(Schema)相关
BUG修复
- Vertex Label为非primary-key id策略应该允许属性为空(HugeGraph-651)
- Gremlin-Server 序列化的 EdgeLabel 仅有一个directed 属性,应该打印完整的schema描述(HugeGraph-680)
- 创建IndexLabel时使用不存在的属性抛出空指针异常,应该抛非法参数异常(HugeGraph-682)
- 创建schema如果已经存在并指定了ifNotExist时,结果应该返回原来的对象(HugeGraph-694)
- 由于EdgeLabel的Frequency默认为null以及不允许修改特性,导致Append操作传递null值在API层反序列化失败(HugeGraph-729)
- 增加对schema名称的正则检查配置项,默认不允许为全空白字符(HugeGraph-727)
- 中文名的schema在前端显示为乱码(HugeGraph-711)
图数据(Vertex、Edge)相关
功能更新
- DataType支持Array,并且List类型除了一个一个添加object,也需要支持直接赋值List对象(HugeGraph-719)
- 自动生成的顶点id由十进制改为十六进制(字符串存储时)(HugeGraph-785)
BUG修复
- HugeGraph-API的VertexLabel/EdgeLabel API未提供eliminate接口(HugeGraph-614)
- 增加非primary-key id策略的顶点时,如果属性为空无法插入到数据库中(HugeGraph-652)
- 使用HugeGraph-Client的gremlin发送无返回值groovy请求时,由于gremlin-server将无返回值序列化为null,导致前端迭代结果集时出现空指针异常(HugeGraph-664)
- RESTful API在没有找到对应id的vertex/edge时返回500(HugeGraph-734)
- HugeElement/HugeProperty的equals()与tinkerpop不兼容(HugeGraph-653)
- HugeEdgeProperty的property的equals函数与tinkerpop兼容 (HugeGraph-740)
- HugeElement/HugeVertexProperty的hashcode函数与tinkerpop不兼容(HugeGraph-728)
- HugeVertex/HugeEdge的toString函数与tinkerpop不兼容(HugeGraph-665)
- 与tinkerpop的异常不兼容,包括IllegalArgumentsException和UnsupportedOperationException(HugeGraph-667)
- 通过id无法找到element时,抛出的异常类型与tinkerpop不兼容(HugeGraph-689)
- vertex.addEdge没有检查properties的数目是否为2的倍数(HugeGraph-716)
- vertex.addEdge()时,assignId调用时机太晚,导致vertex的Set中有重复的edge(HugeGraph-666)
- 查询时包含大于等于三层逻辑嵌套时,会抛出ClassCastException,现改成抛出非法参数异常(HugeGraph-481)
- 边查询如果同时包含source-vertex/direction和property作为条件,查询结果错误(HugeGraph-749)
- HugeGraph-Server 在运行时如果 cassandra 宕掉,插入或查询操作时会抛出DataStax的异常以及详细的调用栈(HugeGraph-771)
- 删除不存在的 indexLabel 时会抛出异常,而删除其他三种元数据(不存在的)则不会(HugeGraph-782)
- 当传给EdgeApi的源顶点或目标顶点的id非法时,会因为查询不到该顶点向客户端返回404状态码(HugeGraph-784)
- 提供内部使用获取元数据的接口,使SchemaManager仅为外部使用,当获取不存在的schema时抛出NotFoundException异常(HugeGraph-743)
- HugeGraph-Client 创建/添加/移除 元数据都应该返回来自服务端的结果(HugeGraph-760)
- 创建HugeGraph-Client时如果输入了错误的主机会导致进程阻塞,无法响应(HugeGraph-718)
查询、索引、缓存相关
功能更新
- 缓存更新更加高效的锁方案(HugeGraph-555)
- 索引查询增加支持只有一个元素的IN语句(原来仅支持EQ)(HugeGraph-739)
BUG修复
- 防止请求数据量过大时服务本身hang住(HugeGraph-777)
其它
功能更新
- 使Init-Store仅用于初始化数据库,清空后端由独立脚本实现(HugeGraph-650)
BUG修复
- 单元测试跑完后在测试机上遗留了临时的keyspace(HugeGraph-611)
- Cassandra的info日志信息过多,将大部分修改为debug级别(HugeGraph-722)
- EventHub.containsListener(String event)判断逻辑有遗漏(HugeGraph-732)
- EventHub.listeners/unlisten(String event)当没有对应event的listener时会抛空指针异常(HugeGraph-733)
测试
Tinkerpop合规测试
- 增加自定义ignore机制,规避掉暂时不需要加入持续集成的测试用例(HugeGraph-647)
- 为TestGraph注册GraphSon和Kryo序列化器,实现 IdGenerator$StringId 的 graphson-v1、graphson-v2 和 Kryo的序列化与反序列化(HugeGraph-660)
- 增加了可配置的测试用例过滤器,使得tinkerpop测试可以用在开发分支和发布分支的回归测试中
- 将tinkerpop测试通过配置文件,加入到回归测试中
单元测试
- 增加Cache及Event的单元测试(HugeGraph-659)
- HugeGraph-Client 增加API的测试(99个)
- HugeGraph-Client 增加单元测试,包括RestResult反序列化的单测(12个)
内部修改
- 改进LOG变量方面代码(HugeGraph-623/HugeGraph-631)
- License格式调整(HugeGraph-625)
- 将序列化器中持有的graph抽离,要用到graph的函数通过传参数实现 (HugeGraph-750)
10 - Contribution Guidelines
10.1 - 如何参与 HugeGraph 社区
TODO: translate this article to Chinese
Thanks for taking the time to contribute! As an open source project, HugeGraph is looking forward to be contributed from everyone, and we are also grateful to all the contributors.
The following is a contribution guide for HugeGraph:
1. Preparation
We can contribute by reporting issues, submitting code patches or any other feedback.
Before submitting the code, we need to do some preparation:
Sign up or login to GitHub: https://github.com
Fork HugeGraph repo from GitHub: https://github.com/apache/incubator-hugegraph/fork
Clone code from fork repo to local: https://github.com/${GITHUB_USER_NAME}/hugegraph
# clone code from remote to local repo
git clone https://github.com/${GITHUB_USER_NAME}/hugegraph
推荐使用HugeGraph-Studio 通过可视化的方式来执行上述代码。另外也可以通过HugeGraph-Client、HugeApi、GremlinConsole和GremlinDriver等多种方式执行上述代码。
3.2 总结
HugeGraph 目前支持 Gremlin 的语法,用户可以通过 Gremlin / REST-API 实现各种查询需求。
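To make the two options above concrete, here is a minimal Java sketch that submits a Gremlin statement through hugegraph-client over the RESTful API. The server address, graph name, and builder-style construction are illustrative assumptions, and the `org.apache.hugegraph.*` packages follow the newer Apache releases (earlier releases used `com.baidu.hugegraph.*`):

```java
import org.apache.hugegraph.driver.GremlinManager;
import org.apache.hugegraph.driver.HugeClient;
import org.apache.hugegraph.structure.gremlin.ResultSet;

public class GremlinQueryExample {
    public static void main(String[] args) {
        // Assumed local HugeGraphServer address and default graph name.
        HugeClient client = HugeClient.builder("http://localhost:8080", "hugegraph")
                                      .build();
        // Submit a Gremlin statement via the RESTful Gremlin API.
        GremlinManager gremlin = client.gremlin();
        ResultSet results = gremlin.gremlin("g.V().limit(3)").execute();
        // Each Result wraps a deserialized vertex/edge/scalar from the response.
        results.iterator().forEachRemaining(r -> System.out.println(r.getObject()));
        client.close();
    }
}
```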
8 - PERFORMANCE
8.1 - HugeGraph BenchMark Performance
1 测试环境
1.1 硬件信息
| CPU | Memory | 网卡 | 磁盘 |
|---|---|---|---|
| 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD |
1.2 软件信息
1.2.1 测试用例
测试使用graphdb-benchmark,一个图数据库测试集。该测试集主要包含4类测试:
- Massive Insertion,批量插入顶点和边,一定数量的顶点或边一次性提交
- Single Insertion,单条插入,每个顶点或者每条边立即提交
- Query,主要是图数据库的基本查询操作:
  - Find Neighbors,查询所有顶点的邻居
  - Find Adjacent Nodes,查询所有边的邻接顶点
  - Find Shortest Path,查询第一个顶点到100个随机顶点的最短路径
- Clustering,基于Louvain Method的社区发现算法
1.2.2 测试数据集
测试使用人造数据和真实数据
MIW、SIW和QW使用SNAP数据集
CW使用LFR-Benchmark generator生成的人造数据
本测试用到的数据集规模
| 名称 | vertex数目 | edge数目 | 文件大小 |
|---|---|---|---|
| email-enron.txt | 36,691 | 367,661 | 4MB |
| com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB |
| amazon0601.txt | 403,393 | 3,387,388 | 47.9MB |
| com-lj.ungraph.txt | 3,997,961 | 34,681,189 | 479MB |
1.3 服务配置
- HugeGraph版本:0.5.6,RestServer、Gremlin Server和backends都在同一台服务器上
  - RocksDB版本:rocksdbjni-5.8.6
- Titan版本:0.5.4,使用thrift+Cassandra模式
  - Cassandra版本:cassandra-3.10,commit-log 和 data 共用SSD
- Neo4j版本:2.0.1
- graphdb-benchmark适配的Titan版本为0.5.4
2 测试结果
2.1 Batch插入性能
| Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
|---|---|---|---|---|
| HugeGraph | 0.629 | 5.711 | 5.243 | 67.033 |
| Titan | 10.15 | 108.569 | 150.266 | 1217.944 |
| Neo4j | 3.884 | 18.938 | 24.890 | 281.537 |
说明
- 表头“()”中数据是数据规模,以边为单位
- 表中数据是批量插入的时间,单位是s
- 例如,HugeGraph使用RocksDB插入amazon0601数据集的300w条边,花费5.711s
结论
- 批量插入性能 HugeGraph(RocksDB) > Neo4j > Titan(thrift+Cassandra)
2.2 遍历性能
2.2.1 术语说明
- FN(Find Neighbor), 遍历所有vertex, 根据vertex查邻接edge, 通过edge和vertex查other vertex
- FA(Find Adjacent), 遍历所有edge,根据edge获得source vertex和target vertex
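For readers unfamiliar with these two access patterns, here is a minimal TinkerPop-Java sketch of FN and FA. It runs on TinkerPop's in-memory "modern" toy graph purely to stay self-contained; the actual measurements use graphdb-benchmark, not this code:

```java
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Direction;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerFactory;

public class TraversalPatterns {
    public static void main(String[] args) {
        // In-memory toy graph standing in for a HugeGraph backend.
        GraphTraversalSource g = TinkerFactory.createModern().traversal();

        // FN (Find Neighbor): for every vertex, read its incident edges,
        // then resolve the vertex at the other end of each edge.
        g.V().forEachRemaining(v ->
                v.edges(Direction.BOTH).forEachRemaining(e -> {
                    Vertex other = e.outVertex().equals(v) ? e.inVertex() : e.outVertex();
                    System.out.println(v.id() + " -- " + other.id());
                }));

        // FA (Find Adjacent): for every edge, read its source and target vertices.
        g.E().forEachRemaining(e ->
                System.out.println(e.outVertex().id() + " -> " + e.inVertex().id()));
    }
}
```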
2.2.2 FN性能
| Backend | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w) | com-lj.ungraph(400w) |
|---|---|---|---|---|
| HugeGraph | 4.072 | 45.118 | 66.006 | 609.083 |
| Titan | 8.084 | 92.507 | 184.543 | 1099.371 |
| Neo4j | 2.424 | 10.537 | 11.609 | 106.919 |
说明
- 表头“()”中数据是数据规模,以顶点为单位
- 表中数据是遍历顶点花费的时间,单位是s
- 例如,HugeGraph使用RocksDB后端遍历amazon0601的所有顶点,并查找邻接边和另一顶点,总共耗时45.118s
2.2.3 FA性能
| Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
|---|---|---|---|---|
| HugeGraph | 1.540 | 10.764 | 11.243 | 151.271 |
| Titan | 7.361 | 93.344 | 169.218 | 1085.235 |
| Neo4j | 1.673 | 4.775 | 4.284 | 40.507 |
说明
- 表头“()”中数据是数据规模,以边为单位
- 表中数据是遍历边花费的时间,单位是s
- 例如,HugeGraph使用RocksDB后端遍历amazon0601的所有边,并查询每条边的两个顶点,总共耗时10.764s
结论
- 遍历性能 Neo4j > HugeGraph(RocksDB) > Titan(thrift+Cassandra)
2.3 HugeGraph-图常用分析方法性能
术语说明
- FS(Find Shortest Path), 寻找最短路径
- K-neighbor,从起始vertex出发,通过K跳边能够到达的所有顶点, 包括1, 2, 3…(K-1), K跳边可达vertex
- K-out, 从起始vertex出发,恰好经过K跳out边能够到达的顶点
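HugeGraph exposes dedicated kneighbor/kout RESTful APIs for these semantics; the Gremlin traversals below are only a rough, illustrative approximation (again on the in-memory toy graph, with `startId` and `k` as placeholder values):

```java
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerFactory;

public class KHopSketch {
    public static void main(String[] args) {
        GraphTraversalSource g = TinkerFactory.createModern().traversal();
        Object startId = 1; // placeholder start vertex id
        int k = 2;          // placeholder depth

        // K-neighbor: all vertices reachable within k hops, any direction.
        g.V(startId).repeat(__.both().dedup()).emit().times(k)
         .dedup().forEachRemaining(v -> System.out.println("kneighbor: " + v));

        // K-out: vertices reached after exactly k outgoing hops.
        g.V(startId).repeat(__.out()).times(k)
         .dedup().forEachRemaining(v -> System.out.println("kout: " + v));
    }
}
```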
FS性能
| Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
|---|---|---|---|---|
| HugeGraph | 0.494 | 0.103 | 3.364 | 8.155 |
| Titan | 11.818 | 0.239 | 377.709 | 575.678 |
| Neo4j | 1.719 | 1.800 | 1.956 | 8.530 |
说明
- 表头“()”中数据是数据规模,以边为单位
- 表中数据是找到从第一个顶点出发到达随机选择的100个顶点的最短路径的时间,单位是s
- 例如,HugeGraph使用RocksDB后端在图amazon0601中查找第一个顶点到100个随机顶点的最短路径,总共耗时0.103s
结论
- 在数据规模小或者顶点关联关系少的场景下,HugeGraph性能优于Neo4j和Titan
- 随着数据规模增大且顶点的关联度增高,HugeGraph与Neo4j性能趋近,都远高于Titan
K-neighbor性能
| 顶点 | 指标 | 一度 | 二度 | 三度 | 四度 | 五度 | 六度 |
|---|---|---|---|---|---|---|---|
| v1 | 时间 | 0.031s | 0.033s | 0.048s | 0.500s | 11.27s | OOM |
| v111 | 时间 | 0.027s | 0.034s | 0.115s | 1.36s | OOM | – |
| v1111 | 时间 | 0.039s | 0.027s | 0.052s | 0.511s | 10.96s | OOM |
说明
- HugeGraph-Server的JVM内存设置为32GB,数据量过大时会出现OOM
K-out性能
| 顶点 | 指标 | 一度 | 二度 | 三度 | 四度 | 五度 | 六度 |
|---|---|---|---|---|---|---|---|
| v1 | 时间 | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s | OOM |
| v1 | 度 | 10 | 133 | 2,453 | 50,830 | 1,128,688 | – |
| v111 | 时间 | 0.032s | 0.042s | 0.136s | 1.25s | 20.62s | OOM |
| v111 | 度 | 10 | 211 | 4,944 | 113,150 | 2,629,970 | – |
| v1111 | 时间 | 0.039s | 0.045s | 0.053s | 1.10s | 2.92s | OOM |
| v1111 | 度 | 10 | 140 | 2,555 | 50,825 | 1,070,230 | – |
说明
- HugeGraph-Server的JVM内存设置为32GB,数据量过大时会出现OOM
结论
- FS场景,HugeGraph性能优于Neo4j和Titan
- K-neighbor和K-out场景,HugeGraph能够实现在5度范围内秒级返回结果
2.4 图综合性能测试-CW
| 数据库 | 规模1000 | 规模5000 | 规模10000 | 规模20000 |
|---|---|---|---|---|
| HugeGraph(core) | 20.804 | 242.099 | 744.780 | 1700.547 |
| Titan | 45.790 | 820.633 | 2652.235 | 9568.623 |
| Neo4j | 5.913 | 50.267 | 142.354 | 460.880 |
说明
- “规模”以顶点为单位
- 表中数据是社区发现完成需要的时间,单位是s,例如HugeGraph使用RocksDB后端在规模10000的数据集,社区聚合不再变化,需要耗时744.780s
- CW测试是CRUD的综合评估
- 该测试中HugeGraph跟Titan一样,没有通过client,直接对core操作
结论
- 社区聚类算法性能 Neo4j > HugeGraph > Titan
8.2 - HugeGraph-API Performance
HugeGraph API性能测试主要测试HugeGraph-Server对RESTful API请求的并发处理能力,包括:
- 顶点/边的单条插入
- 顶点/边的批量插入
- 顶点/边的查询
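For reference, the batch-insert case exercises the RESTful batch endpoint (`POST /apis/graphs/{graph}/graph/vertices/batch`). Below is a minimal Java 11 HttpClient sketch of one such request; the host, graph name, and the two-vertex "person" payload are illustrative assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BatchInsertSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative payload: two vertices of a hypothetical "person" label.
        String body = "["
                + "{\"label\":\"person\",\"properties\":{\"name\":\"marko\",\"age\":29}},"
                + "{\"label\":\"person\",\"properties\":{\"name\":\"vadas\",\"age\":27}}"
                + "]";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/apis/graphs/hugegraph/graph/vertices/batch"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // A successful batch insert returns the ids of the created vertices.
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```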
HugeGraph的每个发布版本的RESTful API的性能测试情况可以参考:
之前的版本只提供HugeGraph所支持的后端种类中性能最好的API性能测试,从0.5.6版本开始,分别提供了单机和集群的性能情况
8.2.1 - v0.5.6 Stand-alone(RocksDB)
1 测试环境
被压机器信息
| CPU | Memory | 网卡 | 磁盘 |
|---|---|---|---|
| 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD,2.7T HDD |
- 起压力机器信息:与被压机器同配置
- 测试工具:apache-Jmeter-2.5.1
注:起压机器和被压机器在同一机房
2 测试说明
2.1 名词定义(时间的单位均为ms)
- Samples – 本次场景中一共完成了多少个线程
- Average – 平均响应时间
- Median – 统计意义上面的响应时间的中值
- 90% Line – 所有线程中90%的线程的响应时间都小于xx
- Min – 最小响应时间
- Max – 最大响应时间
- Error – 出错率
- Throughput – 吞吐量
- KB/sec – 以流量做衡量的吞吐量
2.2 底层存储
后端存储使用RocksDB,HugeGraph与RocksDB都在同一机器上启动,server相关的配置文件除主机和端口有修改外,其余均保持默认。
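For reference, a standalone RocksDB deployment of this kind needs only a few entries in the graph's properties file; the excerpt below is an illustrative sketch (paths and store name are placeholders), not the exact configuration used in this test:

```properties
# conf/hugegraph.properties (illustrative excerpt)
backend=rocksdb
serializer=binary
store=hugegraph
rocksdb.data_path=/path/to/rocksdb-data
rocksdb.wal_path=/path/to/rocksdb-wal
```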
3 性能结果总结
- HugeGraph单条插入顶点和边的速度在每秒1w左右
- 顶点和边的批量插入速度远大于单条插入速度
- 按id查询顶点和边的并发度可达到13000以上,且请求的平均延时小于50ms
4 测试结果及分析
4.1 batch插入
4.1.1 压力上限测试
测试方法
不断提升并发量,测试server仍能正常提供服务的压力上限
压力参数
持续时间:5min
顶点的最大插入速度:
结论:
- 并发2200,顶点的吞吐量是2026.8,每秒可处理的数据:2026.8*200=405360/s
边的最大插入速度
结论:
- 并发900,边的吞吐量是776.9,每秒可处理的数据:776.9*500=388450/s
4.2 single插入
4.2.1 压力上限测试
测试方法
不断提升并发量,测试server仍能正常提供服务的压力上限
压力参数
- 持续时间:5min
- 服务异常标志:错误率大于0.00%
顶点的单条插入
结论:
- 并发11500,吞吐量为10730,顶点的单条插入并发能力为11500
边的单条插入
结论:
- 并发9000,吞吐量是8418,边的单条插入并发能力为9000
4.3 按id查询
4.3.1 压力上限测试
测试方法
不断提升并发量,测试server仍能正常提供服务的压力上限
压力参数
- 持续时间:5min
- 服务异常标志:错误率大于0.00%
顶点的按id查询
结论:
- 并发14000,吞吐量是12663,顶点的按id查询的并发能力为14000,平均延时为44ms
边的按id查询
结论:
- 并发13000,吞吐量是12225,边的按id查询的并发能力为13000,平均延时为12ms
8.2.2 - v0.5.6 Cluster(Cassandra)
1 测试环境
被压机器信息
| CPU | Memory | 网卡 | 磁盘 |
|---|---|---|---|
| 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD,2.7T HDD |
- 起压力机器信息:与被压机器同配置
- 测试工具:apache-Jmeter-2.5.1
注:起压机器和被压机器在同一机房
2 测试说明
2.1 名词定义(时间的单位均为ms)
- Samples – 本次场景中一共完成了多少个线程
- Average – 平均响应时间
- Median – 统计意义上面的响应时间的中值
- 90% Line – 所有线程中90%的线程的响应时间都小于xx
- Min – 最小响应时间
- Max – 最大响应时间
- Error – 出错率
- Throughput – 吞吐量
- KB/sec – 以流量做衡量的吞吐量
2.2 底层存储
后端存储使用15节点Cassandra集群,HugeGraph与Cassandra集群位于不同的服务器,server相关的配置文件除主机和端口有修改外,其余均保持默认。
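Correspondingly, switching the server from local RocksDB to a Cassandra cluster is mostly a backend/serializer change plus a reachable contact point; an illustrative sketch (the hostname is a placeholder):

```properties
# conf/hugegraph.properties (illustrative excerpt)
backend=cassandra
serializer=cassandra
store=hugegraph
cassandra.host=cassandra-node-1
cassandra.port=9042
```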
3 性能结果总结
- HugeGraph单条插入顶点和边的速度分别约为9000条/s和4500条/s
- 顶点和边的批量插入速度分别为5w/s和15w/s,远大于单条插入速度
- 按id查询顶点和边的并发度可达到12000以上,且请求的平均延时小于70ms
4 测试结果及分析
4.1 batch插入
4.1.1 压力上限测试
测试方法
不断提升并发量,测试server仍能正常提供服务的压力上限
压力参数
持续时间:5min
顶点的最大插入速度:
结论:
- 并发3500,顶点的吞吐量是261,每秒可处理的数据:261*200=52200/s
边的最大插入速度
结论:
- 并发1000,边的吞吐量是323,每秒可处理的数据:323*500=161500/s
4.2 single插入
4.2.1 压力上限测试
测试方法
不断提升并发量,测试server仍能正常提供服务的压力上限
压力参数
- 持续时间:5min
- 服务异常标志:错误率大于0.00%
顶点的单条插入
结论:
- 并发9000,吞吐量为8400,顶点的单条插入并发能力为9000
边的单条插入
结论:
- 并发4500,吞吐量是4160,边的单条插入并发能力为4500
4.3 按id查询
4.3.1 压力上限测试
测试方法
不断提升并发量,测试server仍能正常提供服务的压力上限
压力参数
- 持续时间:5min
- 服务异常标志:错误率大于0.00%
顶点的按id查询
结论:
- 并发14500,吞吐量是13576,顶点的按id查询的并发能力为14500,平均延时为11ms
边的按id查询
结论:
- 并发12000,吞吐量是10688,边的按id查询的并发能力为12000,平均延时为63ms
8.2.3 - v0.4.4
1 测试环境
被压机器信息
| 机器编号 | CPU | Memory | 网卡 | 磁盘 |
|---|---|---|---|---|
| 1 | 24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps | 1.4T HDD |
| 2 | 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD,2.7T HDD |
- 起压力机器信息:与编号 1 机器同配置
- 测试工具:apache-Jmeter-2.5.1
注:起压机器和被压机器在同一机房
2 测试说明
2.1 名词定义(时间的单位均为ms)
- Samples – 本次场景中一共完成了多少个线程
- Average – 平均响应时间
- Median – 统计意义上面的响应时间的中值
- 90% Line – 所有线程中90%的线程的响应时间都小于xx
- Min – 最小响应时间
- Max – 最大响应时间
- Error – 出错率
- Throughput – 吞吐量
- KB/sec – 以流量做衡量的吞吐量
2.2 底层存储
后端存储使用RocksDB,HugeGraph与RocksDB都在同一机器上启动,server相关的配置文件除主机和端口有修改外,其余均保持默认。
3 性能结果总结
- HugeGraph每秒能够处理的请求数目上限是7000
- 批量插入速度远大于单条插入,在服务器上测试结果达到22w edges/s,37w vertices/s
- 后端是RocksDB,增大CPU数目和内存大小可以增大批量插入的性能。CPU和内存扩大一倍,性能增加45%-60%
- 批量插入场景,使用SSD替代HDD,性能提升较小,只有3%-5%
4 测试结果及分析
4.1 batch插入
4.1.1 压力上限测试
测试方法
不断提升并发量,测试server仍能正常提供服务的压力上限
压力参数
持续时间:5min
顶点和边的最大插入速度(高性能服务器,使用SSD存储RocksDB数据):
结论:
- 并发1000,边的吞吐量是451,每秒可处理的数据:451*500条=225500/s
- 并发2000,顶点的吞吐量是1842.4,每秒可处理的数据:1842.4*200=368480/s
1. CPU和内存对插入性能的影响(服务器都使用HDD存储RocksDB数据,批量插入)
结论:
- 同样使用HDD硬盘,CPU和内存增加了1倍
- 边:吞吐量从268提升至426,性能提升了约60%
- 顶点:吞吐量从1263.8提升至1842.4,性能提升了约45%
2. SSD和HDD对插入性能的影响(高性能服务器,批量插入)
结论:
- 边:使用SSD吞吐量451.7,使用HDD吞吐量426.6,性能提升5%
- 顶点:使用SSD吞吐量1842.4,使用HDD吞吐量1794,性能提升约3%
3. 不同并发线程数对插入性能的影响(普通服务器,使用HDD存储RocksDB数据)
结论:
- 顶点:1000并发,响应时间7ms和1500并发响应时间1028ms差距悬殊,且吞吐量一直保持在1300左右,因此拐点数据应该在1300,且并发1300时,响应时间已达到22ms,在可控范围内。相比HugeGraph 0.2(1000并发:平均响应时间8959ms),处理能力出现质的飞跃;
- 边:从1000并发到2000并发,处理时间过长,超过3s,且吞吐量几乎在270左右浮动,因此继续增大并发线程数吞吐量不会再大幅增长,270是一个拐点。跟HugeGraph 0.2版本(1000并发:平均响应时间31849ms)相比较,处理能力提升非常明显;
4.2 single插入
4.2.1 压力上限测试
测试方法
不断提升并发量,测试server仍能正常提供服务的压力上限
压力参数
- 持续时间:5min
- 服务异常标志:错误率大于0.00%
结论:
- 顶点:
- 4000并发:正常,无错误率,平均耗时小于1ms, 6000并发无错误,平均耗时5ms,在可接受范围内;
- 8000并发:存在0.01%的错误,已经无法处理,出现connection timeout错误,顶峰应该在7000左右
- 边:
- 4000并发:响应时间1ms,6000并发无任何异常,平均响应时间8ms,主要差异在于 IO network recv和send以及CPU;
- 8000并发:存在0.01%的错误率,平均耗15ms,拐点应该在7000左右,跟顶点结果匹配;
8.2.4 - v0.2
1 测试环境
1.1 软硬件信息
起压和被压机器配置相同,基本参数如下:
| CPU | Memory | 网卡 |
|---|---|---|
| 24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps |
测试工具:apache-Jmeter-2.5.1
1.2 服务配置
- HugeGraph版本:0.2
- 后端存储:使用服务内嵌的cassandra-3.10,单点部署;
- 后端配置修改:修改了cassandra.yaml文件中的以下两个属性,其余选项均保持默认
# 调大 Cassandra 批量写入的警告/失败阈值(单位:KB)
batch_size_warn_threshold_in_kb: 1000
batch_size_fail_threshold_in_kb: 1000
- HugeGraphServer 与 HugeGremlinServer 与cassandra都在同一机器上启动,server 相关的配置文件除主机和端口有修改外,其余均保持默认。
1.3 名词解释
- Samples – 本次场景中一共完成了多少个线程
- Average – 平均响应时间
- Median – 统计意义上面的响应时间的中值
- 90% Line – 所有线程中90%的线程的响应时间都小于xx
- Min – 最小响应时间
- Max – 最大响应时间
- Error – 出错率
- Throughput – 吞吐量
- KB/sec – 以流量做衡量的吞吐量
注:时间的单位均为ms
2 测试结果
2.1 schema
| Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
|---|---|---|---|---|---|---|---|---|---|
| property_keys | 331000 | 1 | 1 | 2 | 0 | 172 | 0.00% | 920.7/sec | 178.1 |
| vertex_labels | 331000 | 1 | 2 | 2 | 1 | 126 | 0.00% | 920.7/sec | 193.4 |
| edge_labels | 331000 | 2 | 2 | 3 | 1 | 158 | 0.00% | 920.7/sec | 242.8 |
结论:schema的接口,在1000并发持续5分钟的压力下,平均响应时间1-2ms,无压力
2.2 single 插入
2.2.1 插入速率测试
压力参数
测试方法:固定并发量,测试server和后端的处理速率
- 并发量:1000
- 持续时间:5min
性能指标
| Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
|---|---|---|---|---|---|---|---|---|---|
| single_insert_vertices | 331000 | 0 | 1 | 1 | 0 | 21 | 0.00% | 920.7/sec | 234.4 |
| single_insert_edges | 331000 | 2 | 2 | 3 | 1 | 53 | 0.00% | 920.7/sec | 309.1 |
结论
- 顶点:平均响应时间1ms,每个请求插入一条数据,平均每秒处理920个请求,则每秒平均总共处理的数据为1*920约等于920条数据;
- 边:平均响应时间2ms,每个请求插入一条数据,平均每秒处理920个请求,则每秒平均总共处理的数据为1*920约等于920条数据;
2.2.2 压力上限测试
测试方法:不断提升并发量,测试server仍能正常提供服务的压力上限
压力参数
- 持续时间:5min
- 服务异常标志:错误率大于0.00%
性能指标
| Concurrency | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
|---|---|---|---|---|---|---|---|---|---|
| 2000(vertex) | 661916 | 1 | 1 | 1 | 0 | 3012 | 0.00% | 1842.9/sec | 469.1 |
| 4000(vertex) | 1316124 | 13 | 1 | 14 | 0 | 9023 | 0.00% | 3673.1/sec | 935.0 |
| 5000(vertex) | 1468121 | 1010 | 1135 | 1227 | 0 | 9223 | 0.06% | 4095.6/sec | 1046.0 |
| 7000(vertex) | 1378454 | 1617 | 1708 | 1886 | 0 | 9361 | 0.08% | 3860.3/sec | 987.1 |
| 2000(edge) | 629399 | 953 | 1043 | 1113 | 1 | 9001 | 0.00% | 1750.3/sec | 587.6 |
| 3000(edge) | 648364 | 2258 | 2404 | 2500 | 2 | 9001 | 0.00% | 1810.7/sec | 607.9 |
| 4000(edge) | 649904 | 1992 | 2112 | 2211 | 1 | 9001 | 0.06% | 1812.5/sec | 608.5 |
结论
- 顶点:
- 4000并发:正常,无错误率,平均耗时13ms;
- 5000并发:每秒处理5000个数据的插入,就会存在0.06%的错误,应该已经处理不了了,顶峰应该在4000
- 边:
- 1000并发:响应时间2ms,跟2000并发的响应时间相差较多,主要是 IO network recv和send以及CPU几乎增加了一倍;
- 2000并发:每秒处理2000个数据的插入,平均耗时953ms,平均每秒处理1750个请求;
- 3000并发:每秒处理3000个数据的插入,平均耗时2258ms,平均每秒处理1810个请求;
- 4000并发:每秒处理4000个数据的插入,平均每秒处理1812个请求;
2.3 batch 插入
2.3.1 插入速率测试
压力参数
测试方法:固定并发量,测试server和后端的处理速率
- 并发量:1000
- 持续时间:5min
性能指标
| Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
|---|---|---|---|---|---|---|---|---|---|
| batch_insert_vertices | 37162 | 8959 | 9595 | 9704 | 17 | 9852 | 0.00% | 103.4/sec | 393.3 |
| batch_insert_edges | 10800 | 31849 | 34544 | 35132 | 435 | 35747 | 0.00% | 28.8/sec | 814.9 |
结论
- 顶点:平均响应时间为8959ms,处理时间过长。每个请求插入199条数据,平均每秒处理103个请求,则每秒平均总共处理的数据为199*103约等于2w条数据;
- 边:平均响应时间31849ms,处理时间过长。每个请求插入499个数据,平均每秒处理28个请求,则每秒平均总共处理的数据为28*499约等于13900条数据;
8.3 - HugeGraph-Loader Performance
使用场景
当要批量插入的图数据(包括顶点和边)条数为billion级别及以下,或者总数据量小于TB时,可以采用HugeGraph-Loader工具持续、高速导入图数据
性能
测试均采用网址数据的边数据
RocksDB单机性能
- 关闭label index,22.8w edges/s
- 开启label index,15.3w edges/s
Cassandra集群性能
- 默认开启label index,6.3w edges/s
8.4 - HugeGraph BenchMark Performance (v0.4.4)
1 测试环境
1.1 硬件信息
CPU Memory 网卡 磁盘 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz 128G 10000Mbps 750GB SSD
1.2 软件信息
1.2.1 测试用例
测试使用graphdb-benchmark,一个图数据库测试集。该测试集主要包含4类测试:
- Massive Insertion,批量插入顶点和边,一定数量的顶点或边一次性提交
- Single Insertion,单条插入,每个顶点或者每条边立即提交
- Query,主要是图数据库的基本查询操作:
  - Find Neighbors,查询所有顶点的邻居
  - Find Adjacent Nodes,查询所有边的邻接顶点
  - Find Shortest Path,查询第一个顶点到100个随机顶点的最短路径
- Clustering,基于Louvain Method的社区发现算法
1.2.2 测试数据集
测试使用人造数据和真实数据
MIW、SIW和QW使用SNAP数据集
CW使用LFR-Benchmark generator生成的人造数据
本测试用到的数据集规模
| 名称 | vertex数目 | edge数目 | 文件大小 |
|---|---|---|---|
| email-enron.txt | 36,691 | 367,661 | 4MB |
| com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB |
| amazon0601.txt | 403,393 | 3,387,388 | 47.9MB |
1.3 服务配置
- HugeGraph版本:0.4.4,RestServer、Gremlin Server和backends都在同一台服务器上
- Cassandra版本:cassandra-3.10,commit-log 和 data 共用SSD
- RocksDB版本:rocksdbjni-5.8.6
- Titan版本:0.5.4,使用thrift+Cassandra模式
- graphdb-benchmark适配的Titan版本为0.5.4
2 测试结果
2.1 Batch插入性能
| Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) |
|---|---|---|---|
| Titan | 9.516 | 88.123 | 111.586 |
| RocksDB | 2.345 | 14.076 | 16.636 |
| Cassandra | 11.930 | 108.709 | 101.959 |
| Memory | 3.077 | 15.204 | 13.841 |
说明
- 表头“()”中数据是数据规模,以边为单位
- 表中数据是批量插入的时间,单位是s
- 例如,HugeGraph使用RocksDB插入amazon0601数据集的300w条边,花费14.076s,速度约为21w edges/s
结论
- RocksDB和Memory后端插入性能优于Cassandra
- HugeGraph和Titan同样使用Cassandra作为后端的情况下,插入性能接近
2.2 遍历性能
2.2.1 术语说明
- FN(Find Neighbor), 遍历所有vertex, 根据vertex查邻接edge, 通过edge和vertex查other vertex
- FA(Find Adjacent), 遍历所有edge,根据edge获得source vertex和target vertex
2.2.2 FN性能
| Backend | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w) |
|---|---|---|---|
| Titan | 7.724 | 70.935 | 128.884 |
| RocksDB | 8.876 | 65.852 | 63.388 |
| Cassandra | 13.125 | 126.959 | 102.580 |
| Memory | 22.309 | 207.411 | 165.609 |
说明
- 表头“()”中数据是数据规模,以顶点为单位
- 表中数据是遍历顶点花费的时间,单位是s
- 例如,HugeGraph使用RocksDB后端遍历amazon0601的所有顶点,并查找邻接边和另一顶点,总共耗时65.852s
2.2.3 FA性能
| Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) |
|---|---|---|---|
| Titan | 7.119 | 63.353 | 115.633 |
| RocksDB | 6.032 | 64.526 | 52.721 |
| Cassandra | 9.410 | 102.766 | 94.197 |
| Memory | 12.340 | 195.444 | 140.89 |
说明
- 表头“()”中数据是数据规模,以边为单位
- 表中数据是遍历边花费的时间,单位是s
- 例如,HugeGraph使用RocksDB后端遍历amazon0601的所有边,并查询每条边的两个顶点,总共耗时64.526s
结论
- 遍历性能:HugeGraph RocksDB > Titan thrift+Cassandra > HugeGraph Cassandra > HugeGraph Memory
2.3 HugeGraph-图常用分析方法性能
术语说明
- FS(Find Shortest Path), 寻找最短路径
- K-neighbor,从起始vertex出发,通过K跳边能够到达的所有顶点, 包括1, 2, 3…(K-1), K跳边可达vertex
- K-out, 从起始vertex出发,恰好经过K跳out边能够到达的顶点
FS性能
| Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) |
|---|---|---|---|
| Titan | 11.333 | 0.313 | 376.06 |
| RocksDB | 44.391 | 2.221 | 268.792 |
| Cassandra | 39.845 | 3.337 | 331.113 |
| Memory | 35.638 | 2.059 | 388.987 |
说明
- 表头“()”中数据是数据规模,以边为单位
- 表中数据是找到从第一个顶点出发到达随机选择的100个顶点的最短路径的时间,单位是s
- 例如,HugeGraph使用RocksDB查找第一个顶点到100个随机顶点的最短路径,总共耗时2.059s
结论
- 在数据规模小或者顶点关联关系少的场景下,Titan最短路径性能优于HugeGraph
- 随着数据规模增大且顶点的关联度增高,HugeGraph最短路径性能优于Titan
K-neighbor性能
| 顶点 | 指标 | 一度 | 二度 | 三度 | 四度 | 五度 | 六度 |
|---|---|---|---|---|---|---|---|
| v1 | 时间 | 0.031s | 0.033s | 0.048s | 0.500s | 11.27s | OOM |
| v111 | 时间 | 0.027s | 0.034s | 0.115s | 1.36s | OOM | – |
| v1111 | 时间 | 0.039s | 0.027s | 0.052s | 0.511s | 10.96s | OOM |
说明
- HugeGraph-Server的JVM内存设置为32GB,数据量过大时会出现OOM
K-out性能
| 顶点 | 指标 | 一度 | 二度 | 三度 | 四度 | 五度 | 六度 |
|---|---|---|---|---|---|---|---|
| v1 | 时间 | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s | OOM |
| v1 | 度 | 10 | 133 | 2,453 | 50,830 | 1,128,688 | – |
| v111 | 时间 | 0.032s | 0.042s | 0.136s | 1.25s | 20.62s | OOM |
| v111 | 度 | 10 | 211 | 4,944 | 113,150 | 2,629,970 | – |
| v1111 | 时间 | 0.039s | 0.045s | 0.053s | 1.10s | 2.92s | OOM |
| v1111 | 度 | 10 | 140 | 2,555 | 50,825 | 1,070,230 | – |
说明
- HugeGraph-Server的JVM内存设置为32GB,数据量过大时会出现OOM
结论
- FS场景,HugeGraph性能优于Titan
- K-neighbor和K-out场景,HugeGraph能够实现在5度范围内秒级返回结果
2.4 图综合性能测试-CW
| 数据库 | 规模1000 | 规模5000 | 规模10000 | 规模20000 |
|---|---|---|---|---|
| Titan | 45.943 | 849.168 | 2737.117 | 9791.46 |
| Memory(core) | 41.077 | 1825.905 | * | * |
| Cassandra(core) | 39.783 | 862.744 | 2423.136 | 6564.191 |
| RocksDB(core) | 33.383 | 199.894 | 763.869 | 1677.813 |
说明
- “规模”以顶点为单位
- 表中数据是社区发现完成需要的时间,单位是s,例如HugeGraph使用RocksDB后端在规模10000的数据集,社区聚合不再变化,需要耗时763.869s
- “*”表示超过10000s未完成
- CW测试是CRUD的综合评估
- 后三者分别是HugeGraph的不同后端,该测试中HugeGraph跟Titan一样,没有通过client,直接对core操作
结论
- HugeGraph在使用Cassandra后端时,性能略优于Titan,随着数据规模的增大,优势越来越明显,数据规模20000时,比Titan快30%
- HugeGraph在使用RocksDB后端时,性能远高于Titan和HugeGraph的Cassandra后端,分别比两者快了6倍和4倍
9 - CHANGELOGS
9.1 - HugeGraph 1.0.0 Release Notes
OLTP API & Client 更新
API/Client 接口更新
- 支持热更新 trace 开关的 /exception/trace API。
- 支持 Cypher 图查询语言 API。
- 支持通过 Swagger UI 接口来查看提供的 API 列表。
- 将各算法中 'limit' 参数的类型由 long 调整为 int。
- 支持在 Client 端跳过 Server 对 HBase 写入数据 (Beta)。
Core & Server
功能更新
- 支持 Java 11 版本。
- 支持 2 个新的 OLTP 算法: adamic-adar 和 resource-allocation。
- 支持 HBase 后端使用哈希 RowKey,并且允许预初始化 HBase 表。
- 支持 Cypher 图查询语言。
- 支持集群 Master 角色的自动管理与故障转移。
- 支持 16 个 OLAP 算法, 包括:LPA, Louvain, PageRank, BetweennessCentrality, RingsDetect等。
- 根据 Apache 基金会对项目的发版要求进行适配,包括 License 合规性、发版流程、代码风格等,支持 Apache 版本发布。
Bug 修复
- 修复无法根据多个 Label 和属性来查询边数据。
- 增加对环路检测算法的最大深度限制。
- 修复 tree() 语句返回结果异常问题。
- 修复批量更新边传入 Id 时的检查异常问题。
- 解决非预期的 Task 状态问题。
- 解决在更新顶点时未清除边缓存的问题。
- 修复 MySQL 后端执行 g.V() 时的错误。
- 修复因为 server-info 无法超时导致的问题。
- 导出了 ConditionP 类型用于 Gremlin 中用户使用。
- 修复 within + Text.contains 查询问题。
- 修复 addIndexLabel/removeIndexLabel 接口的竞争条件问题。
- 限制仅 Admin 允许输出图实例。
- 修复 Profile API 的检查问题。
- 修复在 count().is(0) 查询中 Empty Graph 的问题。
- 修复在异常时无法关闭服务的问题。
- 修复在 Apple M1 系统上的 JNA 报错 UnsatisfiedLinkError 的问题。
- 修复启动 RpcServer 时报 NPE 的问题。
- 修复 ACTION_CLEARED 参数数量的问题。
- 修复 RpcServer 服务启动问题。
- 修复用户传入参数可能的数字转换隐患问题。
- 移除了 Word 分词器依赖。
- 修复 Cassandra 与 MySQL 后端在异常时未优雅关闭迭代器的问题。
配置项更新
- 将配置项 raft.endpoint 从 Graph 作用域移动到 Server 作用域中。
其它修改
- refact(core): enhance schema job module.
- refact(raft): improve raft module & test & install snapshot and add peer.
- refact(core): remove early cycle detection & limit max depth.
- cache: fix assert node.next==empty.
- fix apache license conflicts: jnr-posix and jboss-logging.
- chore: add logo in README & remove outdated log4j version.
- refact(core): improve CachedGraphTransaction perf.
- chore: update CI config & support ci robot & add codeQL SEC-check & graph option.
- refact: ignore security check api & fix some bugs & clean code.
- doc: enhance CONTRIBUTING.md & README.md.
- refact: add checkstyle plugin & clean/format the code.
- refact(core): improve decode string empty bytes & avoid array-construct columns in BackendEntry.
- refact(cassandra): translate ipv4 to ipv6 metrics & update cassandra dependency version.
- chore: use .asf.yaml for apache workflow & replace APPLICATION_JSON with TEXT_PLAIN.
- feat: add system schema store.
- refact(rocksdb): update rocksdb version to 6.22 & improve rocksdb code.
- refact: update mysql scope to test & clean protobuf style/configs.
- chore: upgrade Dockerfile server to 0.12.0 & add editorconfig & improve ci.
- chore: upgrade grpc version.
- feat: support updateIfPresent/updateIfAbsent operation.
- chore: modify abnormal logs & upgrade netty-all to 4.1.44.
- refact: upgrade dependencies & adopt new analyzer & clean code.
- chore: improve .gitignore & update ci configs & add RAT/flatten plugin.
- chore(license): add dependencies-check ci & 3rd-party dependency licenses.
- refact: Shutdown log when shutdown process & fix tx leak & enhance the file path.
- refact: rename package to apache & dependency in all modules (Breaking Change).
- chore: add license checker & update antrun plugin & fix building problem in windows.
- feat: support one-step script for apache release v1.0.0 release.
Computer (OLAP)
Algorithm Changes
- 支持 PageRank 算法。
- 支持 WCC 算法。
- 支持 degree centrality 算法。
- 支持 triangle count 算法。
- 支持 rings detection 算法。
- 支持 LPA 算法。
- 支持 k-core 算法。
- 支持 closeness centrality 算法。
- 支持 betweenness centrality 算法。
- 支持 cluster coefficient 算法。
Platform Changes
- feat: init module computer-core & computer-algorithm & etcd dependency.
- feat: add Id as base type of vertex id.
- feat: init Vertex/Edge/Properties & JsonStructGraphOutput.
- feat: load data from hugegraph server.
- feat: init basic combiner, Bsp4Worker, Bsp4Master.
- feat: init sort & transport interface & basic FileInput/Output Stream.
- feat: init computation & ComputerOutput/Driver interface.
- feat: init Partitioner and HashPartitioner
- feat: init Master/WorkerService module.
- feat: init Heap/LoserTree sorting.
- feat: init rpc module.
- feat: init transport server, client, en/decode, flowControl, heartbeat.
- feat: init DataDirManager & PointerCombiner.
- feat: init aggregator module & add copy() and assign() methods to Value class.
- feat: add startAsync and finishAsync on client side, add onStarted and onFinished on server side.
- feat: init store/sort module.
- feat: link managers in worker sending end.
- feat: implement data receiver of worker.
- feat: implement StreamGraphInput and EntryInput.
- feat: add Sender and Receiver to process compute message.
- feat: add seqfile format.
- feat: add ComputeManager.
- feat: add computer-k8s and computer-k8s-operator.
- feat: add startup and make docker image code.
- feat: sort different type of message use different combiner.
- feat: add HDFS output format.
- feat: mount config-map and secret to container.
- feat: support java11.
- feat: support partition concurrent compute.
- refact: abstract computer-api from computer-core.
- refact: optimize data receiving.
- fix: release file descriptor after input and compute.
- doc: add operator deploy readme.
- feat: prepare for Apache release.
Toolchain (loader, tools, hubble)
- 支持 Loader 使用 SQL 格式来选取从关系数据库导入哪些数据。
- 支持 Loader 从 Spark 导入数据(包括 JDBC 方式)。
- 支持 Loader 增加 Flink-CDC 模式。
- 解决 Loader 导入 ORC 格式数据时,报错 NPE。
- 解决 Loader 在 Spark/Flink 模式时未缓存 Schema 的问题。
- 解决 Loader 的 Json 反序列化问题。
- 解决 Loader 的 Jackson 版本冲突与依赖问题。
- 支持 Hubble 高级算法接口的 UI 界面。
- 支持 Hubble 中 Gremlin 语句的高亮格式显示。
- 支持 Hubble 使用 Docker 镜像部署。
- 支持输出构建日志。
- 解决 Hubble 的端口输入框问题。
- 支持 Apache 项目发版的适配。
Commons (common,rpc)
- 支持 assert-throws 方法返回 Future。
- 增加 Cnm 与 Anm 方法到 CollectionUtil 中。
- 支持 用户自定义的 content-type。
- 支持 Apache 项目发版的适配。
Release Details
更加详细的版本变更信息,可以查看各个子仓库的链接:
9.2 - HugeGraph 0.11 Release Notes
API & Client
功能更新
- 支持梭形相似度算法(hugegraph #671,hugegraph-client #62)
- 支持创建 Schema 时,记录创建的时间(hugegraph #746,hugegraph-client #69)
- 支持 RESTful API 中基于属性的范围查询顶点/边(hugegraph #782,hugegraph-client #73)
- 支持顶点和边的 TTL (hugegraph #794,hugegraph-client #83)
- 统一 RESTful API Server 和 Gremlin Server 的日期格式为字符串(hugegraph #1014,hugegraph-client #82)
- 支持共同邻居,Jaccard 相似度,全部最短路径,带权最短路径和单源最短路径5种遍历算法(hugegraph #936,hugegraph-client #80)
- 支持用户认证和细粒度权限控制(hugegraph #749,hugegraph #985,hugegraph-client #81)
- 支持遍历 API 的顶点计数功能(hugegraph #995,hugegraph-client #84)
- 支持 HTTPS 协议(hugegraph #1036,hugegraph-client #85)
- 支持创建索引时控制是否重建索引(hugegraph #1106,hugegraph-client #91)
- 支持定制的 kout/kneighbor,多点最短路径,最相似 Jaccard 点和模板路径5种遍历算法(hugegraph #1174,hugegraph-client #100,hugegraph-client #106)
内部修改
- 启动 HugeGraphServer 出现异常时快速失败(hugegraph #748)
- 定义 LOADING 模式来加速导入(hugegraph-client #101)
Core
功能更新
- 支持多属性顶点/边的分页查询(hugegraph #759)
- 支持聚合运算的性能优化(hugegraph #813)
- 支持堆外缓存(hugegraph #846)
- 支持属性权限管理(hugegraph #971)
- 支持 MySQL 和 Memory 后端分片,并改进 HBase 分片方法(hugegraph #974)
- 支持基于 Raft 的分布式一致性协议(hugegraph #1020)
- 支持元数据拷贝功能(hugegraph #1024)
- 支持集群的异步任务调度功能(hugegraph #1030)
- 支持发生 OOM 时打印堆信息功能(hugegraph #1093)
- 支持 Raft 状态机更新缓存(hugegraph #1119)
- 支持 Raft 节点管理功能(hugegraph #1137)
- 支持限制查询请求速率的功能(hugegraph #1158)
- 支持顶点/边的属性默认值功能(hugegraph #1182)
- 支持插件化查询加速机制 RamTable(hugegraph #1183)
- 支持索引重建失败时设置为 INVALID 状态(hugegraph #1226)
- 支持 HBase 启用 Kerberos 认证(hugegraph #1234)
BUG修复
- 修复配置权限时 start-hugegraph.sh 的超时问题(hugegraph #761)
- 修复在 studio 执行 gremlin 时的 MySQL 连接失败问题(hugegraph #765)
- 修复 HBase 后端 truncate 时出现的 TableNotFoundException(hugegraph #771)
- 修复限速配置项值未检查的问题(hugegraph #773)
- 修复唯一索引(Unique Index)的返回的异常信息不准确问题(hugegraph #797)
- 修复 RocksDB 后端执行 g.V().hasLabel().count() 时 OOM 问题 (hugegraph #798)
- 修复 traverseByLabel() 分页设置错误问题(hugegraph #805)
- 修复根据 ID 和 SortKeys 更新边属性时误创建边的问题(hugegraph #819)
- 修复部分存储后端的覆盖写问题(hugegraph #820)
- 修复保存执行失败的异步任务时无法取消的问题(hugegraph #827)
- 修复 MySQL 后端在 SSL 模式下无法打开数据库的问题(hugegraph #842)
- 修复索引查询时 offset 无效问题(hugegraph #866)
- 修复 Gremlin 中绝对路径泄露的安全问题(hugegraph #871)
- 修复 reconnectIfNeeded() 方法的 NPE 问题(hugegraph #874)
- 修复 PostgreSQL 的 JDBC_URL 配置没有"/"前缀的问题(hugegraph #891)
- 修复 RocksDB 内存统计问题(hugegraph #937)
- 修复环路检测的两点成环无法检测的问题(hugegraph #939)
- 修复梭形算法计算结束后没有清理计数的问题(hugegraph #947)
- 修复 gremlin-console 无法工作的问题(hugegraph #1027)
- 修复限制数目的按条件过滤邻接边问题(hugegraph #1057)
- 修复 MySQL 执行 SQL 时的 auto-commit 问题(hugegraph #1064)
- 修复通过两个索引查询时发生超时 80w 限制的问题(hugegraph #1088)
- 修复范围索引检查规则错误(hugegraph #1090)
- 修复删除残留索引的错误(hugegraph #1101)
- 修复当前线程为 task-worker 时关闭事务卡住的问题(hugegraph #1111)
- 修复最短路径查询出现 NoSuchElementException 的问题(hugegraph #1116)
- 修复异步任务有时提交两次的问题(hugegraph #1130)
- 修复值很小的 date 反序列化的问题(hugegraph #1152)
- 修复遍历算法未检查起点或者终点是否存在的问题(hugegraph #1156)
- 修复 bin/start-hugegraph.sh 参数解析错误的问题(hugegraph #1178)
- 修复 gremlin-console 运行时的 log4j 错误信息的问题(hugegraph #1229)
内部修改
- 延迟检查非空属性(hugegraph #756)
- 为存储后端增加查看集群节点信息的功能 (hugegraph #821)
- 为 RocksDB 后端增加 compaction 高级配置项(hugegraph #825)
- 增加 vertex.check_adjacent_vertex_exist 配置项(hugegraph #837)
- 检查主键属性不允许为空(hugegraph #847)
- 增加图名字的合法性检查(hugegraph #854)
- 增加对非预期的 SysProp 的查询(hugegraph #862)
- 使用 disableTableAsync 加速 HBase 后端的数据清除(hugegraph #868)
- 允许 Gremlin 环境触发系统异步任务(hugegraph #892)
- 编码字符类型索引中的类型 ID(hugegraph #894)
- 安全模块允许 Cassandra 在执行 CQL 时按需创建线程(hugegraph #896)
- 将 GremlinServer 的默认通道设置为 WsAndHttpChannelizer(hugegraph #903)
- 将 Direction 和遍历算法的类导出到 Gremlin 环境(hugegraph #904)
- 增加顶点属性缓存限制(hugegraph #941,hugegraph #942)
- 优化列表属性的读(hugegraph #943)
- 增加缓存的 L1 和 L2 配置(hugegraph #945)
- 优化 EdgeId.asString() 方法(hugegraph #946)
- 优化当顶点没有属性时跳过后端存储查询(hugegraph #951)
- 创建名字相同但属性不同的元数据时抛出 ExistedException(hugegraph #1009)
- 查询顶点和边后按需关闭事务(hugegraph #1039)
- 当图关闭时清空缓存(hugegraph #1078)
- 关闭图时加锁避免竞争问题(hugegraph #1104)
- 优化顶点和边的删除效率,当提供 Label+ID 删除时免去查询(hugegraph #1150)
- 使用 IntObjectMap 优化元数据缓存效率(hugegraph #1185)
- 使用单个 Raft 节点管理目前的三个 store(hugegraph #1187)
- 在重建索引时提前释放索引删除的锁(hugegraph #1193)
- 在压缩和解压缩异步任务的结果时,使用 LZ4 替代 Gzip(hugegraph #1198)
- 实现 RocksDB 删除 CF 操作的排他性来避免竞争(hugegraph #1202)
- 修改 CSV reporter 的输出目录,并默认设置为不输出(hugegraph #1233)
其它
- cherry-pick 0.10.4 版本的 bug 修复代码(hugegraph #785,hugegraph #1047)
- Jackson 升级到 2.10.2 版本(hugegraph #859)
- Thanks 信息中增加对 Titan 的感谢(hugegraph #906)
- 适配 TinkerPop 测试(hugegraph #1048)
- 修改允许输出的日志最低等级为 TRACE(hugegraph #1050)
- 增加 IDEA 的格式配置文件(hugegraph #1060)
- 修复 Travis CI 太多错误信息的问题(hugegraph #1098)
Loader
功能更新
- 支持读取 Hadoop 配置文件(hugegraph-loader #105)
- 支持指定 Date 属性的时区(hugegraph-loader #107)
- 支持从 ORC 压缩文件导入数据(hugegraph-loader #113)
- 支持单条边插入时设置是否检查顶点(hugegraph-loader #117)
- 支持从 Snappy-raw 压缩文件导入数据(hugegraph-loader #119)
- 支持导入映射文件 2.0 版本(hugegraph-loader #121)
- 增加一个将 utf8-bom 转换为 utf8 的命令行工具(hugegraph-loader #128)
- 支持导入任务开始前清理元数据信息的功能(hugegraph-loader #140)
- 支持 id 列作为属性存储(hugegraph-loader #143)
- 支持导入任务配置 username(hugegraph-loader #146)
- 支持从 Parquet 文件导入数据(hugegraph-loader #153)
- 支持指定读取文件的最大行数(hugegraph-loader #159)
- 支持 HTTPS 协议(hugegraph-loader #161)
- 支持时间戳作为日期格式(hugegraph-loader #164)
BUG修复
- 修复行的 retainAll() 方法没有修改 names 和 values 数组(hugegraph-loader #110)
- 修复 JSON 文件重新加载时的 NPE 问题(hugegraph-loader #112)
内部修改
- 只打印一次插入错误信息,以避免过多的错误信息(hugegraph-loader #118)
- 拆分批量插入和单条插入的线程(hugegraph-loader #120)
- CSV 的解析器改为 SimpleFlatMapper(hugegraph-loader #124)
- 编码主键中的数字和日期字段(hugegraph-loader #136)
- 确保主键列合法或者存在映射(hugegraph-loader #141)
- 跳过主键属性全部为空的顶点(hugegraph-loader #166)
- 在导入任务开始前设置为 LOADING 模式,并在导入完成后恢复原来模式(hugegraph-loader #169)
- 改进停止导入任务的实现(hugegraph-loader #170)
Tools
功能更新
- 支持 Memory 后端的备份功能 (hugegraph-tools #53)
- 支持 HTTPS 协议(hugegraph-tools #58)
- 支持 migrate 子命令配置用户名和密码(hugegraph-tools #61)
- 支持备份顶点和边时指定类型和过滤属性信息(hugegraph-tools #63)
BUG修复
- 修复 dump 命令的 NPE 问题(hugegraph-tools #49)
内部修改
- 在 backup/dump 之前清除分片文件(hugegraph-tools #53)
- 改进 HugeGraph-tools 的报错信息(hugegraph-tools #67)
- 改进 migrate 子命令,删除掉不支持的子配置(hugegraph-tools #68)
9.3 - HugeGraph 0.12 Release Notes
API & Client
接口更新
- 支持 https + auth 模式连接图服务 (hugegraph-client #109 #110)
- 统一 kout/kneighbor 等 OLTP 接口的参数命名及默认值(hugegraph-client #122 #123)
- 支持 RESTful 接口利用 P.textcontains() 进行属性全文检索(hugegraph #1312)
- 增加 graph_read_mode API 接口,以切换 OLTP、OLAP 读模式(hugegraph #1332)
- 支持 list/set 类型的聚合属性 aggregate property(hugegraph #1332)
- 权限接口增加 METRICS 资源类型(hugegraph #1355、hugegraph-client #114)
- 权限接口增加 SCHEMA 资源类型(hugegraph #1362、hugegraph-client #117)
- 增加手动 compact API 接口,支持 rocksdb/cassandra/hbase 后端(hugegraph #1378)
- 权限接口增加 login/logout API,支持颁发或回收 Token(hugegraph #1500、hugegraph-client #125)
- 权限接口增加 project API(hugegraph #1504、hugegraph-client #127)
- 增加 OLAP 回写接口,支持 cassandra/rocksdb 后端(hugegraph #1506、hugegraph-client #129)
- 增加返回一个图的所有 Schema 的 API 接口(hugegraph #1567、hugegraph-client #134)
- 变更 property key 创建与更新 API 的 HTTP 返回码为 202(hugegraph #1584)
- 增强 Text.contains() 支持3种格式:“word”、“(word)”、“(word1|word2|word3)”(hugegraph #1652)
- 统一了属性中特殊字符的行为(hugegraph #1670 #1684)
- 支持动态创建图实例、克隆图实例、删除图实例(hugegraph-client #135)
其它修改
- 修复在恢复 index label 时 IndexLabelV56 id 丢失的问题(hugegraph-client #118)
- 为 Edge 类增加 name() 方法(hugegraph-client #121)
Core & Server
功能更新
- 支持动态创建图实例(hugegraph #1065)
- 支持通过 Gremlin 调用 OLTP 算法(hugegraph #1289)
- 支持多集群使用同一个图权限服务,以共享权限信息(hugegraph #1350)
- 支持跨多节点的 Cache 缓存同步(hugegraph #1357)
- 支持 OLTP 算法使用原生集合以降低 GC 压力提升性能(hugegraph #1409)
- 支持对新增的 Raft 节点打快照或恢复快照(hugegraph #1439)
- 支持对集合属性建立二级索引 Secondary Index(hugegraph #1474)
- 支持审计日志,及其压缩、限速等功能(hugegraph #1492 #1493)
- 支持 OLTP 算法使用高性能并行无锁原生集合以提升性能(hugegraph #1552)
BUG修复
- 修复带权最短路径算法(weighted shortest path)NPE问题 (hugegraph #1250)
- 增加 Raft 相关的安全操作白名单(hugegraph #1257)
- 修复 RocksDB 实例未正确关闭的问题(hugegraph #1264)
- 在清空数据 truncate 操作之后,显式地发起写快照 Raft Snapshot(hugegraph #1275)
- 修复 Raft Leader 在收到 Follower 转发请求时未更新缓存的问题(hugegraph #1279)
- 修复带权最短路径算法(weighted shortest path)结果不稳定的问题(hugegraph #1280)
- 修复 rays 算法 limit 参数不生效问题(hugegraph #1284)
- 修复 neighborrank 算法 capacity 参数未检查的问题(hugegraph #1290)
- 修复 PostgreSQL 因为不存在与用户同名的数据库而初始化失败的问题(hugegraph #1293)
- 修复 HBase 后端当启用 Kerberos 时初始化失败的问题(hugegraph #1294)
- 修复 HBase/RocksDB 后端 shard 结束判断错误问题(hugegraph #1306)
- 修复带权最短路径算法(weighted shortest path)未检查目标顶点存在的问题(hugegraph #1307)
- 修复 personalrank/neighborrank 算法中非 String 类型 id 的问题(hugegraph #1310)
- 检查必须是 master 节点才允许调度 gremlin job(hugegraph #1314)
- 修复 g.V().hasLabel().limit(n) 因为索引覆盖导致的部分结果不准确问题(hugegraph #1316)
- 修复 jaccardsimilarity 算法当并集为空时报 NaN 错误的问题(hugegraph #1324)
- 修复 Raft Follower 节点操作 Schema 多节点之间数据不同步问题(hugegraph #1325)
- 修复因为 tx 未关闭导致的 TTL 不生效问题(hugegraph #1330)
- 修复 gremlin job 的执行结果大于 Cassandra 限制但小于任务限制时的异常处理(hugegraph #1334)
- 检查权限接口 auth-delete 和 role-get API 操作时图必须存在(hugegraph #1338)
- 修复异步任务结果中包含 path/tree 时序列化不正常的问题(hugegraph #1351)
- 修复初始化 admin 用户时的 NPE 问题(hugegraph #1360)
- 修复异步任务原子性操作问题,确保 update/get fields 及 re-schedule 的原子性(hugegraph #1361)
- 修复权限 NONE 资源类型的问题(hugegraph #1362)
- 修复启用权限后,truncate 操作报错 SecurityException 及管理员信息丢失问题(hugegraph #1365)
- 修复启用权限后,解析数据忽略了权限异常的问题(hugegraph #1380)
- 修复 AuthManager 在初始化时会尝试连接其它节点的问题(hugegraph #1381)
- 修复特定的 shard 信息导致 base64 解码错误的问题(hugegraph #1383)
- 修复启用权限后,使用 consistent-hash LB 在校验权限时,creator 为空的问题(hugegraph #1385)
- 改进权限中 VAR 资源不再依赖于 VERTEX 资源(hugegraph #1386)
- 规范启用权限后,Schema 操作仅依赖具体的资源(hugegraph #1387)
- 规范启用权限后,部分操作由依赖 STATUS 资源改为依赖 ANY 资源(hugegraph #1391)
- 规范启用权限后,禁止初始化管理员密码为空(hugegraph #1400)
- 检查创建用户时 username/password 不允许为空(hugegraph #1402)
- 修复更新 Label 时,PrimaryKey 或 SortKey 被设置为可空属性的问题(hugegraph #1406)
- 修复 ScyllaDB 丢失分页结果问题(hugegraph #1407)
- 修复带权最短路径算法(weighted shortest path)权重属性强制转换为 double 的问题(hugegraph #1432)
- 统一 OLTP 算法中的 degree 参数命名(hugegraph #1433)
- 修复 fusiformsimilarity 算法当 similars 为空的时候返回所有的顶点问题(hugegraph #1434)
- 改进 paths 算法,当起始点与目标点相同时应该返回空路径(hugegraph #1435)
- 将 kout/kneighbor 的 limit 参数默认值从 10 修改为 10000000(hugegraph #1436)
- 修复分页信息中的 ‘+’ 被 URL 编码为空格的问题(hugegraph #1437)
- 改进边更新接口的错误提示信息(hugegraph #1443)
- 修复 kout 算法 degree 未在所有 label 范围生效的问题(hugegraph #1459)
- 改进 kneighbor/kout 算法,起始点不允许出现在结果集中(hugegraph #1459 #1463)
- 统一 kout/kneighbor 的 Get 和 Post 版本行为(hugegraph #1470)
- 改进创建边时顶点类型不匹配的错误提示信息(hugegraph #1477)
- 修复 Range Index 的残留索引问题(hugegraph #1498)
- 修复权限操作未失效缓存的问题(hugegraph #1528)
- 将 sameneighbor 的 limit 参数默认值从 10 修改为 10000000(hugegraph #1530)
- 修复 clear API 不应该所有后端都调用 create snapshot 的问题(hugegraph #1532)
- 修复当 loading 模式时创建 Index Label 阻塞问题(hugegraph #1548)
- 修复增加图到 project 或从 project 移除图的问题(hugegraph #1562)
- 改进权限操作的一些错误提示信息(hugegraph #1563)
- 支持浮点属性设置为 Infinity/NaN 的值(hugegraph #1578)
- 修复 Raft 启用 safe_read 时的 quorum read 问题(hugegraph #1618)
- 修复 token 过期时间配置的单位问题(hugegraph #1625)
- 修复 MySQL Statement 资源泄露问题(hugegraph #1627)
- 修复竞争条件下 Schema.getIndexLabel 获取不到数据的问题(hugegraph #1629)
- 修复 HugeVertex4Insert 无法序列化问题(hugegraph #1630)
- 修复 MySQL count Statement 未关闭问题(hugegraph #1640)
- 修复当删除 Index Label 异常时,导致状态不同步问题(hugegraph #1642)
- 修复 MySQL 执行 gremlin timeout 导致的 statement 未关闭问题(hugegraph #1643)
- 改进 Search Index 以兼容特殊 Unicode 字符:\u0000 to \u0003(hugegraph #1659)
- 修复 #1659 引入的 Char 未转化为 String 的问题(hugegraph #1664)
- 修复 has() + within() 查询时结果异常问题(hugegraph #1680)
- 升级 Log4j 版本到 2.17 以修复安全漏洞(hugegraph #1686 #1698 #1702)
- 修复 HBase 后端 shard scan 中 startkey 包含空串时 NPE 问题(hugegraph #1691)
- 修复 paths 算法在深层环路遍历时性能下降问题 (hugegraph #1694)
- 改进 personalrank 算法的参数默认值及错误检查(hugegraph #1695)
- 修复 RESTful 接口 P.within 条件不生效问题(hugegraph #1704)
- 修复启用权限时无法动态创建图的问题(hugegraph #1708)
配置项修改:
- 共享 SSL 相关配置项命名(hugegraph #1260)
- 支持 RocksDB 配置项 rocksdb.level_compaction_dynamic_level_bytes(hugegraph #1262)
- 去除 RESTful Server 服务协议配置项 restserver.protocol,自动提取 URL 中的 Schema(hugegraph #1272)
- 增加 PostgreSQL 配置项 jdbc.postgresql.connect_database(hugegraph #1293)
- 增加针对顶点主键是否编码的配置项 vertex.encode_primary_key_number(hugegraph #1323)
- 增加针对聚合查询是否启用索引优化的配置项 query.optimize_aggregate_by_index(hugegraph #1549)
- 修改 cache_type 的默认值 l1 为 l2(hugegraph #1681)
- 增加 JDBC 强制重连配置项 jdbc.forced_auto_reconnect(hugegraph #1710)
其它修改
- 增加默认的 SSL Certificate 文件(hugegraph #1254)
- OLTP 并行请求共享线程池,而非每个请求使用单独的线程池(hugegraph #1258)
- 修复 Example 的问题(hugegraph #1308)
- 使用 jraft 版本 1.3.5(hugegraph #1313)
- 如果启用了 Raft 模式时,关闭 RocksDB 的 WAL(hugegraph #1318)
- 使用 TarLz4Util 来提升快照 Snapshot 压缩的性能(hugegraph #1336)
- 升级存储的版本号(store version),因为 property key 增加了 read frequency(hugegraph #1341)
- 顶点/边 vertex/edge 的 Get API 使用 queryVertex/queryEdge 方法来替代 iterator 方法(hugegraph #1345)
- 支持 BFS 优化的多度查询(hugegraph #1359)
- 改进 RocksDB deleteRange() 带来的查询性能问题(hugegraph #1375)
- 修复 travis-ci cannot find symbol Namifiable 问题(hugegraph #1376)
- 确保 RocksDB 快照的磁盘与 data path 指定的一致(hugegraph #1392)
- 修复 MacOS 空闲内存 free_memory 计算不准确问题(hugegraph #1396)
- 增加 Raft onBusy 回调来配合限速(hugegraph #1401)
- 升级 netty-all 版本 4.1.13.Final 到 4.1.42.Final(hugegraph #1403)
- 支持 TaskScheduler 暂停当设置为 loading 模式时(hugegraph #1414)
- 修复 raft-tools 脚本的问题(hugegraph #1416)
- 修复 license params 问题(hugegraph #1420)
- 提升写权限日志的性能,通过 batch flush & async write 方式改进(hugegraph #1448)
- 增加 MySQL 连接 URL 的日志记录(hugegraph #1451)
- 提升用户信息校验性能(hugegraph #1460)
- 修复 TTL 因为起始时间问题导致的错误(hugegraph #1478)
- 支持日志配置的热加载及对审计日志的压缩(hugegraph #1492)
- 支持针对用户级别的审计日志的限速(hugegraph #1493)
- 缓存 RamCache 支持用户自定义的过期时间(hugegraph #1494)
- 在 auth client 端缓存 login role 以避免重复的 RPC 调用(hugegraph #1507)
- 修复 IdSet.contains() 未复写 AbstractCollection.contains() 问题(hugegraph #1511)
- 修复当 commitPartOfEdgeDeletions() 失败时,未回滚 rollback 的问题(hugegraph #1513)
- 提升 Cache metrics 性能(hugegraph #1515)
- 当发生 license 操作错误时,增加打印异常日志(hugegraph #1522)
- 改进 SimilarsMap 实现(hugegraph #1523)
- 使用 tokenless 方式来更新 coverage(hugegraph #1529)
- 改进 project update 接口的代码(hugegraph #1537)
- 允许从 option() 访问 GRAPH_STORE(hugegraph #1546)
- 优化 kout/kneighbor 的 count 查询以避免拷贝集合(hugegraph #1550)
- 优化 shortestpath 遍历方式,以数据量少的一端优先遍历(hugegraph #1569)
- 完善 rocksdb.data_disks 配置项的 allowed keys 提示信息(hugegraph #1585)
- 为 number id 优化 OLTP 遍历中的 id2code 方法性能(hugegraph #1623)
- 优化 HugeElement.getProperties() 返回 Collection<Property>(hugegraph #1624)
- 增加 APACHE PROPOSAL 文件(hugegraph #1644)
- 改进 close tx 的流程(hugegraph #1655)
- 当 reset() 时为 MySQL close 捕获所有类型异常(hugegraph #1661)
- 改进 OLAP property 模块代码(hugegraph #1675)
- 改进查询模块的执行性能(hugegraph #1711)
Loader
- 支持导入 Parquet 格式文件(hugegraph-loader #174)
- 支持 HDFS Kerberos 权限验证(hugegraph-loader #176)
- 支持 HTTPS 协议连接到服务端导入数据(hugegraph-loader #183)
- 修复 trust store file 路径问题(hugegraph-loader #186)
- 处理 loading mode 重置的异常(hugegraph-loader #187)
- 增加在插入数据时对非空属性的检查(hugegraph-loader #190)
- 修复客户端与服务端时区不同导致的时间判断问题(hugegraph-loader #192)
- 优化数据解析性能(hugegraph-loader #194)
- 当用户指定了文件头时,检查其必须不为空(hugegraph-loader #195)
- 修复示例程序中 MySQL struct.json 格式问题(hugegraph-loader #198)
- 修复顶点边导入速度不精确的问题(hugegraph-loader #200 #205)
- 当导入启用 check-vertex 时,确保先导入顶点再导入边(hugegraph-loader #206)
- 修复边 Json 数据导入格式不统一时数组溢出的问题(hugegraph-loader #211)
- 修复因边 mapping 文件不存在导致的 NPE 问题(hugegraph-loader #213)
- 修复读取时间可能出现负数的问题(hugegraph-loader #215)
- 改进目录文件的日志打印(hugegraph-loader #223)
- 改进 loader 的 Schema 处理流程(hugegraph-loader #230)
Tools
- 支持 HTTPS 协议(hugegraph-tools #71)
- 移除 --protocol 参数,直接从URL中自动提取(hugegraph-tools #72)
- 支持将数据 dump 到 HDFS 文件系统(hugegraph-tools #73)
- 修复 trust store file 路径问题(hugegraph-tools #75)
- 支持权限信息的备份恢复(hugegraph-tools #76)
- 支持无参数的 Printer 打印(hugegraph-tools #79)
- 修复 MacOS free_memory 计算问题(hugegraph-tools #82)
- 支持备份恢复时指定线程数(hugegraph-tools #83)
- 支持动态创建图、克隆图、删除图等命令(hugegraph-tools #95)
9.4 - HugeGraph 0.10 Release Notes
API & Client
功能更新
- 支持 HugeGraphServer 服务端内存紧张时返回错误拒绝请求 (hugegraph #476)
- 支持 API 白名单和 HugeGraphServer GC 频率控制功能 (hugegraph #522)
- 支持 Rings API 的 source_in_ring 参数 (hugegraph #528,hugegraph-client #48)
- 支持批量按策略更新属性接口 (hugegraph #493,hugegraph-client #46)
- 支持 Shard Index 前缀与范围检索索引 (hugegraph #574,hugegraph-client #56)
- 支持顶点的 UUID ID 类型 (hugegraph #618,hugegraph-client #59)
- 支持唯一性约束索引(Unique Index) (hugegraph #636,hugegraph-client #60)
- 支持 API 请求超时功能 (hugegraph #674)
- 支持根据名称列表查询 schema (hugegraph #686,hugegraph-client #63)
- 支持按分页方式获取异步任务 (hugegraph #720)
内部修改
- 保持 traverser 的参数与 server 端一致 (hugegraph-client #44)
- 支持在 Shard 内使用分页方式遍历顶点或者边的方法 (hugegraph-client #47)
- 支持 Gremlin 查询结果持有 GraphManager (hugegraph-client #49)
- 改进 RestClient 的连接参数 (hugegraph-client #52)
- 增加 Date 类型属性的测试 (hugegraph-client #55)
- 适配 HugeGremlinException 异常 (hugegraph-client #57)
- 增加新功能的版本匹配检查 (hugegraph-client #66)
- 适配 UUID 的序列化 (hugegraph-client #67)
Core
功能更新
- 支持 PostgreSQL 和 CockroachDB 存储后端 (hugegraph #484)
- 支持负数索引 (hugegraph #513)
- 支持边的 Vertex + SortKeys 的前缀范围查询 (hugegraph #574)
- 支持顶点的邻接边按分页方式查询 (hugegraph #659)
- 禁止通过 Gremlin 进行敏感操作 (hugegraph #176)
- 支持 Lic 校验功能 (hugegraph #645)
- 支持 Search Index 查询结果按匹配度排序的功能 (hugegraph #653)
- 升级 tinkerpop 至版本 3.4.3 (hugegraph #648)
BUG修复
- 修复按分页方式查询边时剩余数目(remaining count)错误 (hugegraph #515)
- 修复清空后端时边缓存未清空的问题 (hugegraph #488)
- 修复无法插入 List 类型的属性问题 (hugegraph #534)
- 修复 PostgreSQL 后端的 existDatabase(), clearBackend() 和 rollback() 功能 (hugegraph #531)
- 修复程序关闭时 HugeGraphServer 和 GremlinServer 残留问题 (hugegraph #554)
- 修复在 LockTable 中重复抓锁的问题 (hugegraph #566)
- 修复从 Edge 中获取的 Vertex 没有属性的问题 (hugegraph #604)
- 修复交叉关闭 RocksDB 的连接池问题 (hugegraph #598)
- 修复在超级点查询时 limit 失效问题 (hugegraph #607)
- 修复使用 Equal 条件和分页的情况下查询 Range Index 只返回第一页的问题 (hugegraph #614)
- 修复查询 limit 在删除部分数据后失效的问题 (hugegraph #610)
- 修复 Example1 的查询错误 (hugegraph #638)
- 修复 HBase 的批量提交部分错误问题 (hugegraph #634)
- 修复索引搜索时 compareNumber() 方法的空指针问题 (hugegraph #629)
- 修复更新属性值为已经删除的顶点或边的属性时失败问题 (hugegraph #679)
- 修复 system 类型残留索引无法清除问题 (hugegraph #675)
- 修复 HBase 在 Metrics 信息中的单位问题 (hugegraph #713)
- 修复存储后端未初始化问题 (hugegraph #708)
- 修复按 Label 删除边时导致的 IN 边残留问题 (hugegraph #727)
- 修复 init-store 会生成多份 backend_info 问题 (hugegraph #723)
内部修改
- 抑制因 PostgreSQL 后端 database 不存在时的报警信息 (hugegraph #527)
- 删除 PostgreSQL 后端的无用配置项 (hugegraph #533)
- 改进错误信息中的 HugeType 为易读字符串 (hugegraph #546)
- 增加 jdbc.storage_engine 配置项指定存储引擎 (hugegraph #555)
- 增加使用后端链接时按需重连功能 (hugegraph #562)
- 避免打印空的查询条件 (hugegraph #583)
- 缩减 Variable 的字符串长度 (hugegraph #581)
- 增加 RocksDB 后端的 cache 配置项 (hugegraph #567)
- 改进异步任务的异常信息 (hugegraph #596)
- 将 Range Index 拆分成 INT,LONG,FLOAT,DOUBLE 四个表存储 (hugegraph #574)
- 改进顶点和边 API 的 Metrics 名字 (hugegraph #631)
- 增加 G1GC 和 GC Log 的配置项 (hugegraph #616)
- 拆分顶点和边的 Label Index 表 (hugegraph #635)
- 减少顶点和边的属性存储空间 (hugegraph #650)
- 支持对 Secondary Index 和 Primary Key 中的数字进行编码 (hugegraph #676)
- 减少顶点和边的 ID 存储空间 (hugegraph #661)
- 支持 Cassandra 后端存储的二进制序列化存储 (hugegraph #680)
- 放松对最小内存的限制 (hugegraph #689)
- 修复 RocksDB 后端批量写时的 Invalid column family 问题 (hugegraph #701)
- 更新异步任务状态时删除残留索引 (hugegraph #719)
- 删除 ScyllaDB 的 Label Index 表 (hugegraph #717)
- 启动时使用多线程方式打开 RocksDB 后端存储多个数据目录 (hugegraph #721)
- RocksDB 版本从 v5.17.2 升级至 v6.3.6 (hugegraph #722)
其它
- 增加 API tests 到 codecov 统计中 (hugegraph #711)
- 改进配置文件的默认配置项 (hugegraph #575)
- 改进 README 中的致谢信息 (hugegraph #548)
Loader
功能更新
- 支持 JSON 数据源的 selected 字段 (hugegraph-loader #62)
- 支持定制化 List 元素之间的分隔符 (hugegraph-loader #66)
- 支持值映射 (hugegraph-loader #67)
- 支持通过文件后缀过滤文件 (hugegraph-loader #82)
- 支持对导入进度进行记录和断点续传 (hugegraph-loader #70,hugegraph-loader #87)
- 支持从不同的关系型数据库中读取 Header 信息 (hugegraph-loader #79)
- 支持属性为 Unsigned Long 类型值 (hugegraph-loader #91)
- 支持顶点的 UUID ID 类型 (hugegraph-loader #98)
- 支持按照策略批量更新属性 (hugegraph-loader #97)
BUG修复
- 修复 nullable key 在 mapping field 不工作的问题 (hugegraph-loader #64)
- 修复 Parse Exception 无法捕获的问题 (hugegraph-loader #74)
- 修复在等待异步任务完成时获取信号量数目错误的问题 (hugegraph-loader #86)
- 修复空表时 hasNext() 返回 true 的问题 (hugegraph-loader #90)
- 修复布尔值解析错误问题 (hugegraph-loader #92)
内部修改
- 增加 HTTP 连接参数 (hugegraph-loader #81)
- 改进导入完成的总结信息 (hugegraph-loader #80)
- 改进一行数据缺少列或者有多余列的处理逻辑 (hugegraph-loader #93)
Tools
功能更新
- 支持 0.8 版本 server 备份的数据恢复至 0.9 版本的 server 中 (hugegraph-tools #34)
- 增加 timeout 全局参数 (hugegraph-tools #44)
- 增加 migrate 子命令支持迁移图 (hugegraph-tools #45)
BUG修复
- 修复 dump 命令不支持 split size 参数的问题 (hugegraph-tools #32)
内部修改
- 删除 Hadoop 对 Jersey 1.19的依赖 (hugegraph-tools #31)
- 优化子命令在 help 信息中的排序 (hugegraph-tools #37)
- 使用 log4j2 清除 log4j 的警告信息 (hugegraph-tools #39)
9.5 - HugeGraph 0.9 Release Notes
API & Client
功能更新
- 增加 personal rank API 和 neighbor rank API (hugegraph #274)
- Shortest path API 增加 skip_degree 参数跳过超级点(hugegraph #433,hugegraph-client #42)
- vertex/edge 的 scan API 支持分页机制 (hugegraph #428,hugegraph-client #35)
- VertexAPI 使用简化的属性序列化器 (hugegraph #332,hugegraph-client #37)
- 增加 customized paths API 和 customized crosspoints API (hugegraph #306,hugegraph-client #40)
- 在 server 端所有线程忙时返回503错误 (hugegraph #343)
- 保持 API 的 depth 和 degree 参数一致 (hugegraph #252,hugegraph-client #30)
BUG修复
- 增加属性的时候验证 Date 而非 Timestamp 的值 (hugegraph-client #26)
内部修改
- RestClient 支持重用连接 (hugegraph-client #33)
- 使用 JsonUtil 替换冗余的 ObjectMapper (hugegraph-client #41)
- Edge 直接引用 Vertex 使得批量插入更友好 (hugegraph-client #29)
- 使用 JaCoCo 替换 Cobertura 统计代码覆盖率 (hugegraph-client #39)
- 改进 Shard 反序列化机制 (hugegraph-client #34)
Core
功能更新
- 支持 Cassandra 的 NetworkTopologyStrategy (hugegraph #448)
- 元数据删除和索引重建使用分页机制 (hugegraph #417)
- 支持将 HugeGraphServer 作为系统服务 (hugegraph #170)
- 单一索引查询支持分页机制 (hugegraph #328)
- 在初始化图库时支持定制化插件 (hugegraph #364)
- 为HBase后端增加 hbase.zookeeper.znode.parent 配置项 (hugegraph #333)
- 支持异步 Gremlin 任务的进度更新 (hugegraph #325)
- 使用异步任务的方式删除残留索引 (hugegraph #285)
- 支持按 sortKeys 范围查找功能 (hugegraph #271)
BUG修复
- 修复二级索引删除时 Cassandra 后端的 batch 超过65535限制的问题 (hugegraph #386)
- 修复 RocksDB 磁盘利用率的 metrics 不正确问题 (hugegraph #326)
- 修复异步索引删除错误修复 (hugegraph #336)
- 修复 BackendSessionPool.close() 的竞争条件问题 (hugegraph #330)
- 修复保留的系统 ID 不工作问题 (hugegraph #315)
- 修复 cache 的 metrics 信息丢失问题 (hugegraph #321)
- 修复使用 hasId() 按 id 查询顶点时不支持数字 id 问题 (hugegraph #302)
- 修复重建索引时的 80w 限制问题和 Cassandra 后端的 batch 65535问题 (hugegraph #292)
- 修复残留索引删除无法处理未展开(none-flatten)查询的问题 (hugegraph #281)
内部修改
- 迭代器变量统一命名为 ‘iter’(hugegraph #438)
- 增加 PageState.page() 方法统一获取分页信息接口 (hugegraph #429)
- 为基于 mapdb 的内存版后端调整代码结构,增加测试用例 (hugegraph #357)
- 支持代码覆盖率统计 (hugegraph #376)
- 设置 tx capacity 的下限为 COMMIT_BATCH(默认为500) (hugegraph #379)
- 增加 shutdown hook 来自动关闭线程池 (hugegraph #355)
- PerfExample 的统计时间排除环境初始化时间 (hugegraph #329)
- 改进 BinarySerializer 中的 schema 序列化 (hugegraph #316)
- 避免对 primary key 的属性创建多余的索引 (hugegraph #317)
- 限制 Gremlin 异步任务的名字小于256字节 (hugegraph #313)
- 使用 multi-get 优化 HBase 后端的按 id 查询 (hugegraph #279)
- 支持更多的日期数据类型 (hugegraph #274)
- 修改 Cassandra 和 HBase 的 port 范围为(1,65535) (hugegraph #263)
其它
- 增加 travis API 测试 (hugegraph #299)
- 删除 rest-server.properties 中的 GremlinServer 相关的默认配置项 (hugegraph #290)
Loader
功能更新
- 支持从 HDFS 和 关系型数据库导入数据 (hugegraph-loader #14)
- 支持传递权限 token 参数(hugegraph-loader #46)
- 支持通过 regex 指定要跳过的行 (hugegraph-loader #43)
- 支持导入 TEXT 文件时的 List/Set 属性(hugegraph-loader #38)
- 支持自定义的日期格式 (hugegraph-loader #28)
- 支持从指定目录导入数据 (hugegraph-loader #33)
- 支持忽略最后多余的列或者 null 值的列 (hugegraph-loader #23)
BUG修复
- 修复 Example 问题(hugegraph-loader #57)
- 修复当 vertex 是 customized ID 策略时边解析问题(hugegraph-loader #24)
内部修改
- URL regex 改进 (hugegraph-loader #47)
Tools
功能更新
- 支持海量数据备份和恢复到本地和 HDFS,并支持压缩 (hugegraph-tools #21)
- 支持异步任务取消和清理功能 (hugegraph-tools #20)
- 改进 graph-clear 命令的提示信息 (hugegraph-tools #23)
BUG修复
- 修复 restore 命令总是使用 ‘hugegraph’ 作为目标图的问题,支持指定图 (hugegraph-tools #26)
9.6 - HugeGraph 0.8 Release Notes
API & Client
功能更新
- 服务端增加 rays 和 rings 的 RESTful API(hugegraph #45)
- 使创建 IndexLabel 返回异步任务(hugegraph #95,hugegraph-client #9)
- 客户端增加恢复模式相关的 API(hugegraph-client #10)
- 让 task-list API 不返回 task_input 和 task_result(hugegraph #143)
- 增加取消异步任务的API(hugegraph #167,hugegraph-client #15)
- 增加获取后端 metrics 的 API(hugegraph #155)
BUG修复
- 分页获取时最后一页的 page 应该为 null 而非 “null”(hugegraph #168)
- 分页迭代获取服务端已经没有下一页了应该停止获取(hugegraph-client #16)
- 添加顶点使用自定义 Number Id 时报类型无法转换(hugegraph-client #21)
内部修改
- 增加持续集成测试(hugegraph-client #19)
Core
功能更新
- 取消异步任务通过 label 查询时 80w 的限制(hugegraph #93)
- 允许 cardinality 为 set 时传入 Json List 形式的属性值(hugegraph #109)
- 支持在恢复模式和合并模式来恢复图(hugegraph #114)
- RocksDB 后端支持多个图指定为同一个存储目录(hugegraph #123)
- 支持用户自定义权限认证器(hugegraph-loader #133)
- 当服务重启后重新开始未完成的任务(hugegraph #188)
- 当顶点的 Id 策略为自定义时,检查是否已存在相同 Id 的顶点(hugegraph #189)
Bug Fixes
- Add a check that the HasContainer predicate is not null (hugegraph #16)
- Fix init-store failing on the RocksDB backend due to wrong data/log directories (hugegraph #25)
- Fix a spurious timeout prompt (the server was actually reachable) when starting hugegraph with a missing logs directory (hugegraph #38)
- Fix the ScyllaDB backend missing vertex table registration (hugegraph #47)
- Fix hasLabel queries failing when multiple labels are passed (hugegraph #50)
- Fix the Memory backend not initializing task-related schema (hugegraph #100)
- Fix hasLabel queries erroring when elements exceed 800k, even with a limit (hugegraph #104)
- Fix task status not being saved after a task starts running (hugegraph #113)
- Fix a direct cast of HugeGraphAuthProxy to HugeGraph when checking backend version info (hugegraph #127)
- Fix the batch.max_vertices_per_batch option not taking effect (hugegraph #130)
- Fix HugeGraphServer starting without errors but being inaccessible when rest-server.properties is invalid (hugegraph #131)
- Fix MySQL backend commits from one thread being invisible to other threads (hugegraph #163)
- Fix "String cannot be cast to Date" when querying with union(branch) + has(date) (hugegraph #181)
- Fix incomplete results when querying vertices with a limit on the RocksDB backend (hugegraph #197)
- Fix the "another thread cannot operate tx" error (hugegraph #204)
Internal Changes
- Split graph.cache_xx options into vertex.cache_xx and edge.cache_xx (hugegraph #56)
- Remove hugegraph-dist's dependency on hugegraph-api (hugegraph #61)
- Optimize set intersection and difference operations (hugegraph #85)
- Optimize transaction cache handling and index/Id queries (hugegraph #105)
- Name the threads of each thread pool (hugegraph #124)
- Add and optimize some metrics (hugegraph #138)
- Add metrics for unfinished tasks (hugegraph #141)
- Commit index updates in batches instead of all at once (hugegraph #150)
- Hold the schema read lock while adding vertices/edges until commit/rollback completes (hugegraph #180)
- Speed up TinkerPop tests (hugegraph #19)
- Fix TinkerPop tests failing to find the filter file under the resource directory (hugegraph #26)
- Enable the supportCustomIds feature in TinkerPop tests (hugegraph #69)
- Add HBase backend tests to continuous integration (hugegraph #41)
- Prevent the CI deploy script from running multiple times (hugegraph #170)
- Fix failing cache unit tests (hugegraph #177)
- Use tmpfs for some backends' storage in CI to speed up tests (hugegraph #206)
Others
- Add issue templates (hugegraph #42)
- Add a CONTRIBUTING file (hugegraph #59)
Loader
Feature Updates
- Support ignoring specific columns in source files (hugegraph-loader #2)
- Support importing properties with Set cardinality (hugegraph-loader #10)
- Execute single-record inserts with multiple threads too, fixing slow trailing single-record imports when errors are frequent (hugegraph-loader #12)
Bug Fixes
- Fix possible miscounting during import (hugegraph-loader #4)
- Fix import errors for vertices with customized Number Ids (hugegraph-loader #6)
- Fix import errors for vertices with composite primary keys (hugegraph-loader #18)
Internal Changes
- Add continuous integration tests (hugegraph-loader #8)
- Improve the error message when a file does not exist (hugegraph-loader #16)
Tools
Feature Updates
- Add KgDumper (hugegraph-tools #6)
- Support restoring graphs in restore mode and merge mode (hugegraph-tools #9)
Bug Fixes
- Fix the get_ip helper in scripts erroring when ifconfig is not installed (hugegraph-tools #13)
9.7 - HugeGraph 0.7 Release Notes
API & Java Client
Feature Updates
- Support async schema deletion and index rebuilding (HugeGraph-889)
- Add monitoring APIs integrated with Gremlin's metrics framework (HugeGraph-1273)
Bug Fixes
- Fix EdgeAPI setting property values as property keys when updating properties (HugeGraph-81)
- Return 400 instead of 404 when deleting a vertex or edge with an illegal id (HugeGraph-1337)
Core
Feature Updates
- Support the HBase backend (HugeGraph-1280)
- Add an async API framework so time-consuming operations can run via async APIs (HugeGraph-387)
- Support secondary indexes on long property columns, removing the 256-byte index column length limit (HugeGraph-1314)
- Support "create or update" operations for vertex properties (HugeGraph-1303)
- Support full-text search (HugeGraph-1322)
- Support version checks for database tables (HugeGraph-1328)
- Fix "Batch too large" or "Batch 65535 statements" errors when deleting super vertices (HugeGraph-1354)
- Support async schema deletion and index rebuilding (HugeGraph-889)
- Support long-running async Gremlin tasks (HugeGraph-889)
Bug Fixes
- Prevent super-vertex access from blocking the service by querying too many next-level vertices (HugeGraph-1302)
- Fix "connection already closed" errors during HBase initialization (HugeGraph-1318)
- Fix "String cannot be cast to Date" when filtering vertices by a date property (HugeGraph-1319)
- Fix incorrect range-index detection during leftover index deletion (HugeGraph-1291)
- Fix leftover index cleanup not considering index combinations after composite indexes were introduced (HugeGraph-1311)
- Fix errors when deleting edges by otherV conditions, caused by the edge's vertex not existing (HugeGraph-1347)
- Fix incorrect offset/limit results with label indexes (HugeGraph-1329)
- Fix data not being deletable when removing a vertex label or edge label that has label index disabled (HugeGraph-1355)
Internal Changes
- Fix HugeGraphServer startup failures caused by the HBase backend pulling in a newer jackson-databind (HugeGraph-1306)
- Let Core and Client each hold their own shard class instead of depending on the common module (HugeGraph-1316)
- Remove the 800k capacity limit when rebuilding indexes and deleting vertex/edge labels (HugeGraph-1297)
- Make all schema operations consider synchronization (HugeGraph-1279)
- Split Cassandra index tables to one element id per row, avoiding very slow or stuck imports under heavy aggregation (HugeGraph-1304)
- Move common-related test cases from hugegraph-test to hugegraph-common (HugeGraph-1297)
- Support saving task parameters for async tasks to enable task recovery (HugeGraph-1344)
- Support deploying docs to GitHub via script (HugeGraph-1351)
- Implement index deletion for the RocksDB and HBase backends (HugeGraph-1317)
Loader
Feature Updates
- Support user-created schema passed in as a file in HugeLoader (HugeGraph-1295)
Bug Fixes
- Fix HugeLoader not distinguishing input file encodings, which could garble data (HugeGraph-1288)
- Fix the three subdirectories of HugeLoader's packaged example directory being empty (HugeGraph-1288)
- Fix parsing errors when CSV columns themselves contain commas (HugeGraph-1320)
- Prevent a single failed record from blocking the whole batch insert (HugeGraph-1336)
- Fix errors caused by printing exception messages as templates (HugeGraph-1345)
- Fix the program exiting when edge data has a wrong column count (HugeGraph-1346)
- Fix HugeLoader's automatic schema creation failing (HugeGraph-1363)
- Check ID length in bytes rather than string length (HugeGraph-1374)
Internal Changes
- Add test cases (HugeGraph-1361)
Tools
Feature Updates
- Speed up backup/restore with multiple threads and add a retry mechanism (HugeGraph-1307)
- Support passing a path to one-click deployment for storing packages (HugeGraph-1325)
- Implement graph dumping (building vertices and associated edges in memory) (HugeGraph-1339)
- Add backup-scheduler for scheduled backups that keep a fixed number of the latest backups (HugeGraph-1326)
- Add async task querying and async Gremlin execution (HugeGraph-1357)
Bug Fixes
- Use UTF-8 encoding for hugegraph-tools backup and restore (HugeGraph-1321)
- Set a default JVM heap size and release version for hugegraph-tools (HugeGraph-1340)
Studio
Bug Fixes
- Fix groovy parsing errors in g.V() when vertex ids contain newlines in HugeStudio (HugeGraph-1292)
- Limit the number of returned vertices and edges (HugeGraph-1333)
- Fix notes disappearing or hanging while loading (HugeGraph-1353)
- Fix HugeStudio packaging silently swallowing compile failures, leaving the release package unable to start (HugeGraph-1368)
9.8 - HugeGraph 0.6 Release Notes
API & Java Client
Feature Updates
- Add RESTful APIs paths and crosspoints to find multiple paths between source and target vertices, or paths containing crosspoints (HugeGraph-1210)
- Add concurrency control for batch inserts at the API layer, preventing all threads being consumed by writes so that queries starve (HugeGraph-1228)
- Add a scan API allowing clients to fetch vertices and edges concurrently (HugeGraph-1197)
- Support username/password in the Client for accessing access-controlled HugeGraph (HugeGraph-1256)
- Add an offset parameter to the vertex and edge list APIs (HugeGraph-1261)
- Disallow passing page together with [label, properties] in the vertex/edge list RESTful APIs (HugeGraph-1262)
- Add degree, capacity, and limit to the k-out, k-neighbor, paths, shortestpath, and similar APIs (HugeGraph-1176)
- Add set/get/clear interfaces for restore status (HugeGraph-1272)
Bug Fixes
- Make RestClient's basic auth use preemptive mode (HugeGraph-1257)
- Fix only the first iterator obtained from a ResultSet being iterable in HugeGraph-Client (HugeGraph-1278)
Core
Feature Updates
- Implement the scan feature for RocksDB (HugeGraph-1198)
- Support deleting keys from schema userdata (HugeGraph-1195)
- Support range queries on date-type properties (HugeGraph-1208)
- Push limit down to the backend to avoid redundant index reads where possible (HugeGraph-1234)
- Add API permissions and access control (HugeGraph-1162)
- Forbid multiple backends configuring store to the same value (HugeGraph-1269)
Bug Fixes
- Fix RocksDB range queries returning records of other IndexLabels when only an upper or lower bound is given (HugeGraph-1211)
- Fix graphTransaction returning one extra result in RocksDB queries with a limit (HugeGraph-1234)
- Fix init-store occasionally hanging on CentOS with the generic io.netty; switch to netty-transport-native-epoll (HugeGraph-1255)
- Fix the 65535-element cap of Cassandra's IN statements (query by id) (HugeGraph-1239)
- Fix errors when querying by primary key plus an index (or ordinary property) (HugeGraph-1276)
- Fix init-store.sh failing or hanging on CentOS (HugeGraph-1255)
Tests
None
Internal Changes
- Move the compareNumber method to the common module (HugeGraph-1208)
- Fix HugeGraphServer failing to start on Ubuntu machines (HugeGraph-1154)
- Fix init-store.sh not being executable from the bin directory (HugeGraph-1223)
- Fix HugeGraphServer not being stoppable with CTRL+C during startup (HugeGraph-1223)
- Check whether the port is in use before starting HugeGraphServer (HugeGraph-1223)
- Check that a JDK is installed and is version 1.8 before starting HugeGraphServer (HugeGraph-1223)
- Add a getMap() method to HugeConfig (HugeGraph-1236)
- Change the defaults to use the RocksDB backend and comment the important options (HugeGraph-1240)
- Rename userData to userdata (HugeGraph-1249)
- Fix the HugeGraphServer process not being visible via jps on CentOS 4.3
- Add the ALLOW_TRACE option to control whether exception stack traces are returned (HugeGraph-81)
Tools
Feature Updates
- Add an automated deployment tool to install all components (HugeGraph-1267)
- Add a clear script and split deploy from start-all (HugeGraph-1274)
- Monitor the hugegraph service to improve availability (HugeGraph-1266)
- Add backup/restore functionality and commands (HugeGraph-1272)
- Add commands corresponding to the graphs API (HugeGraph-1272)
Bug Fixes
Loader
Feature Updates
- Add CSV and JSON examples by default (HugeGraph-1259)
Bug Fixes
9.9 - HugeGraph 0.5 Release Notes
API & Java Client
Feature Updates
- Add the boolean parameter enable_label_index to VertexLabel and EdgeLabel to indicate whether to build a label index (HugeGraph-1085)
- Add RESTful APIs for efficient shortest path, K-out, and K-neighbor queries (HugeGraph-944)
- Add a RESTful API to batch-query vertices by an id list (HugeGraph-1153)
- Support iterating over all vertices and edges, implemented via paging (HugeGraph-1166)
- Fix VertexAPI failing to find vertices whose ids contain URL-reserved characters such as / and % (HugeGraph-1127)
- Rename the RESTful API parameter controlling vertex checks during batch edge insert from checkVertex to check_vertex (HugeGraph-81)
Bug Fixes
- Fix hasId() failing to match LongId correctly (HugeGraph-1083)
Core
Feature Updates
- Support common RocksDB config options (HugeGraph-1068)
- Support rate limiting for insert, delete, and update operations (HugeGraph-1071)
- Support importing sst files into RocksDB (HugeGraph-1077)
- Add the MySQL backend (HugeGraph-1091)
- Add the Palo backend (HugeGraph-1092)
- Add a switch controlling whether to build vertex/edge label indexes (HugeGraph-1085)
- Support fetching data via API paging (HugeGraph-1105)
- Automatically create the configured RocksDB data directory if it does not exist (HugeGraph-1135)
- Add advanced traversal functions shortest path, K-neighbor, K-out, and batch vertex query by id list (HugeGraph-944)
- Add a timeout retry mechanism to init-store.sh (HugeGraph-1150)
- Split the edge table into two tables: OUT and IN (HugeGraph-1002)
- Limit the maximum vertex ID length to 128 bytes (HugeGraph-1168)
- Optimize Cassandra with data compression (snappy and lz4 configurable) (HugeGraph-428)
- Support IN and OR operations (HugeGraph-137)
- Support RocksDB writing to multiple disks in parallel (HugeGraph-1177)
- Optimize MySQL performance with batch inserts (HugeGraph-1188)
Bug Fixes
- Fix Kryo serialization exceptions under multithreading (HugeGraph-1066)
- Fix elem-id being written twice in RocksDB index entries (HugeGraph-1094)
- Fix SnowflakeIdGenerator.instance possibly being initialized multiple times under multithreading (HugeGraph-1095)
- Clarify the exception message when querying an edge's vertex that does not exist (HugeGraph-1101)
- Fix init-store failing when multiple graphs are configured with RocksDB (HugeGraph-1151)
- Fix Date-type property values not being supported (HugeGraph-1165)
- Fix internal system indexes being created but unsearchable (HugeGraph-1167)
- Fix records in the edge-in table not being deleted when deleting edges by label after the table split (HugeGraph-1182)
Tests
- Add the vertex.force_id_string option, enabled when running TinkerPop tests (HugeGraph-1069)
Internal Changes
- Add an allowValues() function to OptionChecker in the common library for enum values (HugeGraph-1075)
- Clean up unused and outdated dependencies to shrink the packaged archive (HugeGraph-1078)
- Fix HugeConfig constructed from a file path failing to check values of options configured multiple times (HugeGraph-1079)
- Support smart allocation of max memory at server startup (HugeGraph-1154)
- Fix the server failing to start on macOS because the free command is unavailable (HugeGraph-1154)
- Register config options by string to avoid directly depending on the backend package (HugeGraph-1171)
- Add the StoreDumper tool to inspect data in the backend store (HugeGraph-1172)
- Parameterize all internal build machine info in Jenkins (HugeGraph-1179)
- Move RestClient to the common module so both server and client depend on common (HugeGraph-1183)
- Add the config dump tool ConfDumper (HugeGraph-1193)
9.10 - HugeGraph 0.4.4 Release Notes
API & Java Client
Feature Updates
- Support WebSocket in HugeGraph-Server so Gremlin-Console can connect, and support calling Core code directly from groovy scripts (HugeGraph-977)
- Adapt to Schema-id (HugeGraph-1038)
Bug Fixes
- hugegraph-0.3.3: fix NPE (500) when deleting vertex properties with properties=null in the body (HugeGraph-950)
- hugegraph-0.3.3: fix NPE in graph.schema().getVertexLabel() (HugeGraph-955)
- Fix vertex and edge property collections not being thread-safe in HugeGraph-Client (HugeGraph-1013)
- Fix exception messages of batch operations not being printable (HugeGraph-1013)
- Improve exception messages that showed propertyKey ids, which users could not immediately recognize (HugeGraph-1055)
- Fix NPE (500) when batch-adding vertices with one null body (HugeGraph-1056)
- Fix a regression throwing "The label of vertex can't be null" when appending properties with a body containing only properties (HugeGraph-1057)
- Adapt HugeGraph-Client: replace Timestamp with Date in PropertyKey's DataType (HugeGraph-1059)
- Fix 500 errors when creating an IndexLabel with an empty baseValue (HugeGraph-1061)
Core
Feature Updates
- Implement independent upper-layer transaction management compatible with the TinkerPop transaction spec (HugeGraph-918, HugeGraph-941)
- Improve the memory backend so it is correctly accessible via API and adapted to TinkerPop transactions (HugeGraph-41)
- Add the RocksDB backend driver framework (HugeGraph-929)
- Implement numeric index range queries for RocksDB (HugeGraph-963)
- Add ids to all schema and replace the name-based columns in the tables with ids (HugeGraph-589)
- Convert query condition values to the key's defined type when they do not match (HugeGraph-964)
- Unify offset/limit implementations across backends (HugeGraph-995)
- Support returning vertex/edge query results iteratively in Core rather than loading them all into memory (HugeGraph-203)
- Support range queries in the memory backend (HugeGraph-967)
- Change the memory backend's secondary index support from traversal to IdQuery (HugeGraph-996)
- Support complex composite index queries (any logically queryable index combination) (HugeGraph-903)
- Add a field (map) for storing user data in schema (HugeGraph-902)
- Unify ID parsing and serialization (in both API and Backend) (HugeGraph-965)
- Improve multi-graph instance support for RocksDB, which has no keyspace concept (HugeGraph-973)
- Support setting a username and password for Cassandra connections (HugeGraph-999)
- Support caching all schema in the schema cache (get-all-schema) (HugeGraph-1037)
- Keep exposing schema by name for now rather than using schema ids directly (HugeGraph-1032)
- Change the user-supplied ID strategy to support String and Number (HugeGraph-956)
Bug Fixes
- Fix leftover schemaLabel objects in the database after deleting old prefix indexLabels (HugeGraph-969)
- Fix HugeConfig sharing common Options during parsing, causing option overrides between graphs (HugeGraph-984)
- Show a friendlier exception message when database data is incompatible (HugeGraph-998)
- Support setting a username and password for Cassandra connections (HugeGraph-999)
- Fix a RocksDB assert triggered by deleteRange end overflow (HugeGraph-971)
- Allow querying vertices/edges by null ids, returning an empty collection (HugeGraph-1045)
- Fix incorrect search results when partially updated data in memory is uncommitted (HugeGraph-1046)
- Fix g.V().hasLabel(XX) with a nonexistent label failing with Internal Server Error and "Undefined property key: '~label'" (HugeGraph-1048)
- Fix schema fetched via gremlin being reduced to name strings (HugeGraph-1049)
- Fix count operations failing on large data volumes (HugeGraph-1051)
- Fix RocksDB hanging after continuously inserting 60-80 million edges (HugeGraph-1053)
- Tidy up property type support and serialize property values in binary format in BinarySerializer (HugeGraph-1062)
Tests
- Add TinkerPop performance tests (HugeGraph-987)
Internal Changes
- Share the HugeGraph object when HugeFactory opens the same graph (same name) (HugeGraph-983)
- Standardize index type names: secondary, range, search (HugeGraph-991)
- Show a friendlier exception message when database data is incompatible (HugeGraph-998)
- Split the gryo and graphson modules in the IO layer (HugeGraph-1041)
- Add query performance tests to PerfExample (HugeGraph-1044)
- Disable gremlin-server metric logs (HugeGraph-1050)
9.11 - HugeGraph 0.3.3 Release Notes
API & Java Client
Feature Updates
- Add nullable property sets to vertex-label and edge-label, specifiable at create and append time (HugeGraph-245)
- Provide a TinkerPop variables RESTful API backed by the core feature (HugeGraph-396)
- Support updating and deleting vertex/edge properties (HugeGraph-894)
- Support conditional queries on vertices/edges (HugeGraph-919)
Bug Fixes
- Fix NPEs when HugeGraph-API receives a null or "" RequestBody (HugeGraph-795)
- Add input parameter checks to HugeGraph-API to avoid NPEs (HugeGraph-796 ~ HugeGraph-798, HugeGraph-802, HugeGraph-808 ~ HugeGraph-814, HugeGraph-817, HugeGraph-823, HugeGraph-860)
- Fix edges with a missing outV-label or inV-label still being created, which should not be allowed (HugeGraph-835)
- Fix arbitrary index-names being accepted when creating vertex-labels and edge-labels (HugeGraph-837)
- Fix 500 errors when creating an index with base-type="VERTEX" and similar values (VL/EL expected) (HugeGraph-846)
- Improve the unfriendly message when base-type and base-value mismatch at index creation (HugeGraph-848)
- Unify responses when deleting a nonexistent relationship: schema returned 204 while vertex and edge types returned 404 (404 expected everywhere) (HugeGraph-853, HugeGraph-854)
- Fix the wrong message when appending properties to a vertex-label without id-strategy (HugeGraph-861)
- Fix the wrong message when appending properties to an edge-label without a name (HugeGraph-862)
- Fix the wrong message when appending properties to an edge-label with source-label "null" (HugeGraph-863)
- Throw an exception when the StringId in a query is an empty string (HugeGraph-868)
- Fix newly created edges (via the REST API) not showing in studio through g.V(), while g.E() did show them (HugeGraph-869)
- Do not return stack traces to the client on HugeGraph-Server internal errors (500) (HugeGraph-879)
- Fix addEdge throwing IllegalArgumentException when passed an empty id string (HugeGraph-885)
- Fix deserialization errors when HugeGraph-Client parses Gremlin Path results containing no Vertex/Edge (HugeGraph-891)
- Fix serialization failures caused by HugeKeys enum strings being lower-cased with underscores, so API field names mismatched class variable names (HugeGraph-896)
- Fix 404 being returned (400 expected) when adding an edge to a nonexistent vertex (HugeGraph-922)
Core
Feature Updates
- Support updating vertex/edge properties, including indexed columns (HugeGraph-369)
- Support empty or empty-string index fields (hugegraph-553 and hugegraph-288)
- Defer vertex/edge property consistency guarantees until properties are actually accessed (hugegraph-763)
- Add the ScyllaDB backend driver (HugeGraph-772)
- Support TinkerPop hasKey and hasValue queries (HugeGraph-826)
- Support the TinkerPop variables feature (HugeGraph-396)
- Reserve properties starting with "~" as hidden system properties that users cannot create (HugeGraph-842)
- Add Backend Features to accommodate differing backend capabilities (HugeGraph-844)
- Handle possible mutation update operations in a fine-grained way instead of throwing directly (HugeGraph-887)
- Check that properties appended to vertex-labels/edge-labels are nullable (HugeGraph-890)
- When querying by ids, return the objects that exist instead of throwing if some ids are missing (HugeGraph-900)
Bug Fixes
- Fix an assert error in Vertex.edges(Direction.BOTH, ...) (HugeGraph-661)
- Fix failing to assign the same (single) property multiple times in addVertex (HugeGraph-662)
- Fix untouched indexed columns being lost when updating properties (HugeGraph-801)
- Fix ConditionQuery in GraphTransaction failing because no commit was triggered before an index query (HugeGraph-805)
- Work around Cassandra's missing query offset by fetching limit=offset+limit records and filtering (HugeGraph-851)
- Fix inserts overriding a delete when multiple inserts are combined with one delete (HugeGraph-857)
- Throw an exception when the StringId in a query is an empty string (HugeGraph-868)
- Fix the schema method returning only hidden info (HugeGraph-912)
Tests
- Use different keyspaces for TinkerPop structure and process tests (HugeGraph-763)
- Add TinkerPop tests and unit tests to the release-after-merge pipeline (HugeGraph-763)
- Split Jenkins scripts into per-stage sub-scripts so edits to in-project sub-scripts take effect in builds (HugeGraph-800)
- Add a clear-backends feature to wipe backends after the TinkerPop suite finishes (HugeGraph-852)
- Add BackendMutation tests (HugeGraph-801)
- Fix possible NoHostAvailableException when operating the graph from multiple threads (HugeGraph-883)
Internal Changes
- Set the JVM heap to 256M initial and 2048M max for HugeGraphServer and HugeGremlinServer startup (HugeGraph-218)
- Use schemaBuilder instead of string concatenation when creating Cassandra tables (hugegraph-773)
- Fix clear() erroring when graph initialization fails during tests (e.g. the database is unreachable) (HugeGraph-910)
- Fix Example throwing "Need to specify a readable config file rather than..." (HugeGraph-921)
- Keep HugeGraphServer and HugeGremlinServer caches in sync (HugeGraph-569)
9.12 - HugeGraph 0.2 Release Notes
API & Java Client
Feature Updates
Version 0.2 implements the basic features of a graph database and provides the following:
Schema
Vertex Label
- Create vertex labels
- Delete vertex labels
- Query vertex labels
- Add properties to vertex labels
Edge Label
- Create edge labels
- Delete edge labels
- Query edge labels
- Add properties to edge labels
Property Key
- Create property keys
- Delete property keys
- Query property keys
Index Label
- Create index labels
- Delete index labels
- Query index labels
Schema checks
- Check schema that other schema depends on (e.g. a Vertex Label depends on a Property Key)
- Check schema that data depends on (e.g. a Vertex depends on a Vertex Label)
Graph Data
Vertex
Add vertices
Delete vertices
Add vertex properties
Delete vertex properties (must be non-indexed columns)
Batch insert vertices
Query
Batch query
Vertex ID strategies
- User-specified ID (string)
- User-specified property combination as ID (concatenated into a visible string)
- Auto-generated ID
Edge
- Add edges
- Add multiple edges of the same type between two given vertices (SortKey)
- Delete edges
- Add edge properties
- Delete edge properties (must be non-indexed columns)
- Batch insert edges
- Query
- Batch query
Vertex/Edge Properties
Supported property types
- text
- boolean
- byte, blob
- int, long
- float, double
- timestamp
- uuid
Single-value properties
Multi-value properties: List, Set (note: non-nested)
Transactions
- Atomicity-level guarantees (backend-dependent)
- Auto-commit transactions
- Manual-commit transactions
- Parallel transactions
Indexes
Index types
- Secondary indexes
- Range indexes (numeric types)
Index operations
- Create single-column indexes for vertices/edges of a given type (indexing List or Set columns is unsupported)
- Create composite indexes for vertices/edges of a given type (indexing List or Set columns is unsupported; composite indexes are prefix indexes)
- Delete indexes of vertices/edges of a given type (some or all)
- Rebuild indexes of vertices/edges of a given type (some or all)
Query/Traversal
List all schema and graph data (Limit supported, paging unsupported)
Query schema and graph data by ID
Query graph data by a property value
Query graph data by a property value range (the property must be numeric)
Query vertices/edges by vertex/edge label plus a property value
Query vertices by vertex/edge label plus a property value range (the property must be numeric)
Query vertices by Vertex Label
Query edges by Edge Label
Query edges by vertex
- Query all edges of a vertex
- Query a vertex's edges in a given direction (out or in)
- Query a vertex's edges in a given direction with a given label
- Query a specific edge among same-label edges between two vertices (SortKey)
Standard Gremlin traversals
Cache
Cacheable content
- Schema cache
- Vertex cache
Cache features
- LRU policy
- High-performance concurrent access
- Timeout-based expiry
Interfaces (RESTful API)
- Version API
- Graph instance API
- Schema API
- Graph data API
- Gremlin API
See the API documentation for more details
Backend Support
Cassandra backend
- Persistence
- CQL3
- Cluster
Memory backend (testing only)
- Non-persistent
- Some features unsupported (e.g. updating edge properties, querying edges by edge label)
Others
Config options
- Backend store type
- Serialization method
- Cache parameters
Multiple graph instances
- Static (add multiple graph config files)
Version checks
- Internal dependency version match checks
- API version match checks
9.13 - HugeGraph 0.2.4 Release Notes
API & Java Client
Feature Updates
Schema-related
Bug Fixes
- Allow empty properties for Vertex Labels with non-primary-key id strategies (HugeGraph-651)
- Fix Gremlin-Server serializing EdgeLabel with only a directed attribute; the full schema description should be printed (HugeGraph-680)
- Throw IllegalArgumentException instead of NPE when creating an IndexLabel with a nonexistent property (HugeGraph-682)
- Return the existing object when creating schema that already exists with ifNotExist specified (HugeGraph-694)
- Fix Append deserialization failures at the API layer caused by EdgeLabel's Frequency defaulting to null and being immutable (HugeGraph-729)
- Add a regex-check option for schema names; all-whitespace names are rejected by default (HugeGraph-727)
- Fix Chinese schema names being garbled in the frontend (HugeGraph-711)
Graph data (Vertex, Edge) related
Feature Updates
- Support Array in DataType, and allow assigning a List object directly instead of adding objects one by one for List-typed properties (HugeGraph-719)
- Change auto-generated vertex ids from decimal to hexadecimal (when stored as strings) (HugeGraph-785)
Bug Fixes
- Add the missing eliminate interface to the VertexLabel/EdgeLabel APIs (HugeGraph-614)
- Fix vertices with non-primary-key id strategies failing to insert into the database when properties are empty (HugeGraph-652)
- Fix NPEs when iterating result sets after HugeGraph-Client sends a void groovy request, since gremlin-server serializes the void result as null (HugeGraph-664)
- Fix 500 being returned when no vertex/edge matches the given id in the RESTful API (HugeGraph-734)
- Fix HugeElement/HugeProperty equals() being incompatible with TinkerPop (HugeGraph-653)
- Make HugeEdgeProperty's property equals() compatible with TinkerPop (HugeGraph-740)
- Fix HugeElement/HugeVertexProperty hashCode() being incompatible with TinkerPop (HugeGraph-728)
- Fix HugeVertex/HugeEdge toString() being incompatible with TinkerPop (HugeGraph-665)
- Fix exceptions incompatible with TinkerPop, including IllegalArgumentException and UnsupportedOperationException (HugeGraph-667)
- Fix the exception type thrown when an element is not found by id being incompatible with TinkerPop (HugeGraph-689)
- Fix vertex.addEdge not checking that the number of properties is a multiple of 2 (HugeGraph-716)
- Fix duplicate edges in the vertex's Set because assignId was called too late in vertex.addEdge() (HugeGraph-666)
- Fix ClassCastException on queries with three or more levels of logical nesting; an IllegalArgumentException is now thrown instead (HugeGraph-481)
- Fix wrong results for edge queries combining source-vertex/direction with property conditions (HugeGraph-749)
- Fix HugeGraph-Server exposing DataStax exceptions with full stack traces on insert/query when Cassandra goes down at runtime (HugeGraph-771)
- Fix deleting a nonexistent indexLabel throwing an exception while deleting the other three (nonexistent) schema types does not (HugeGraph-782)
- Fix EdgeApi returning 404 when the source or target vertex id is illegal, because the vertex lookup fails (HugeGraph-784)
- Provide internal schema-fetching interfaces so SchemaManager is external-only, throwing NotFoundException for nonexistent schema (HugeGraph-743)
- Make HugeGraph-Client create/append/eliminate schema operations return the server's result (HugeGraph-760)
- Fix the process blocking unresponsively when HugeGraph-Client is created with a wrong host (HugeGraph-718)
Query, index, and cache related
Feature Updates
- A more efficient locking scheme for cache updates (HugeGraph-555)
- Support single-element IN statements in index queries (previously only EQ) (HugeGraph-739)
Bug Fixes
- Prevent the service hanging when requests carry too much data (HugeGraph-777)
Others
Feature Updates
- Restrict Init-Store to database initialization; clearing backends is handled by a separate script (HugeGraph-650)
Bug Fixes
- Fix temporary keyspaces being left on test machines after unit tests (HugeGraph-611)
- Reduce excessive Cassandra info logs by demoting most to debug level (HugeGraph-722)
- Fix a gap in the EventHub.containsListener(String event) logic (HugeGraph-732)
- Fix NPEs in EventHub.listeners/unlisten(String event) when the event has no listeners (HugeGraph-733)
Tests
TinkerPop compliance tests
- Add a custom ignore mechanism to skip test cases not yet needed in continuous integration (HugeGraph-647)
- Register GraphSON and Kryo serializers for TestGraph, implementing graphson-v1, graphson-v2, and Kryo (de)serialization for IdGenerator$StringId (HugeGraph-660)
- Add a configurable test case filter so TinkerPop tests can be used for regression testing on development and release branches
- Add TinkerPop tests to regression testing via config files
Unit tests
- Add Cache and Event unit tests (HugeGraph-659)
- Add API tests for HugeGraph-Client (99 cases)
- Add unit tests for HugeGraph-Client, including RestResult deserialization tests (12 cases)
Internal Changes
- Improve the code around the LOG variable (HugeGraph-623/HugeGraph-631)
- Adjust license formatting (HugeGraph-625)
- Remove the graph held by serializers; functions needing the graph take it as a parameter (HugeGraph-750)
10 - Contribution Guidelines
10.1 - How to Contribute to the HugeGraph Community
Thanks for taking the time to contribute! As an open source project, HugeGraph welcomes contributions of all kinds, and we are grateful to all of our contributors.
The following is a contribution guide for HugeGraph:
1. Preparation
You can contribute by reporting issues, submitting code patches, or providing any other feedback.
Before submitting code, some preparation is needed:
Sign up or login to GitHub: https://github.com
Fork HugeGraph repo from GitHub: https://github.com/apache/incubator-hugegraph/fork
Clone code from fork repo to local: https://github.com/${GITHUB_USER_NAME}/hugegraph
# clone code from remote to local repo
git clone https://github.com/${GITHUB_USER_NAME}/hugegraph
Configure local HugeGraph repo
cd hugegraph
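After cloning, it helps to wire the fork up to the upstream Apache repo before branching. A minimal sketch of the usual fork workflow (the remote name hugegraph and the branch name bugfix-branch below are just example labels, not project conventions):
# add the upstream Apache repo alongside your fork's origin
git remote add hugegraph https://github.com/apache/incubator-hugegraph
# sync your local master with upstream before starting work
git fetch hugegraph
git checkout master
git rebase hugegraph/master
# do your work on a dedicated feature branch
git checkout -b bugfix-branch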
API
- 1: HugeGraph RESTful API
- 1.1: Schema API
- 1.2: PropertyKey API
- 1.3: VertexLabel API
- 1.4: EdgeLabel API
- 1.5: IndexLabel API
- 1.6: Rebuild API
- 1.7: Vertex API
- 1.8: Edge API
- 1.9: Traverser API
- 1.10: Rank API
- 1.11: Variable API
- 1.12: Graphs API
- 1.13: Task API
- 1.14: Gremlin API
- 1.15: Authentication API
- 1.16: Other API
- 2: HugeGraph Java Client
- 3: Gremlin-Console
1 - HugeGraph RESTful API
HugeGraph-Server provides Client interfaces for operating graphs over HTTP via HugeGraph-API, mainly covering CRUD of schema and graph data, traversal algorithms, variables, graph operations, and other operations.
1.1 - Schema API
1.1 Schema
HugeGraph provides a single interface to fetch all Schema information of a graph, including PropertyKey, VertexLabel, EdgeLabel, and IndexLabel.
Method & Url
GET http://localhost:8080/graphs/{graph_name}/schema

e.g: GET http://localhost:8080/graphs/hugegraph/schema

Response Status
200
Response Body
{
  "propertykeys": [
    {
      "id": 7,
      "name": "price",
      "data_type": "DOUBLE",
      "cardinality": "SINGLE",
      "aggregate_type": "NONE",
      "write_type": "OLTP",
      "properties": [],
      "status": "CREATED",
      "user_data": {
        "~create_time": "2023-05-08 17:49:05.316"
      }
    },
    ... (six more property keys of the same shape, truncated in the source) ...
  ],
  "vertexlabels": [
    {
      ... (leading fields truncated in the source; this is the "person" vertex label) ...
      "primary_keys": [
        "name"
      ],
      "nullable_keys": [
        "age",
        "city"
      ],
      "index_labels": [
        "personByAge",
        "personByCity",
        "personByAgeAndCity"
      ],
      ...
      "ttl": 0,
      "enable_label_index": true,
      "user_data": {
        "~create_time": "2023-05-08 17:49:05.336"
      }
    },
    {
      "id": 2,
      "name": "software",
      "id_strategy": "CUSTOMIZE_NUMBER",
      "primary_keys": [],
      "nullable_keys": [],
      "index_labels": [
        "softwareByPrice"
      ],
      ...
      "ttl": 0,
      "enable_label_index": true,
      "user_data": {
        "~create_time": "2023-05-08 17:49:05.347"
      }
    }
  ],
  "edgelabels": [
    {
      ...
      "name": "knows",
      "source_label": "person",
      "target_label": "person",
      "frequency": "SINGLE",
      "sort_keys": [],
      "nullable_keys": [],
      "index_labels": [
        "knowsByWeight"
      ],
      ...
      "ttl": 0,
      "enable_label_index": true,
      "user_data": {
        "~create_time": "2023-05-08 17:49:08.437"
      }
    },
    {
      ...
      "source_label": "person",
      "target_label": "software",
      "frequency": "SINGLE",
      "sort_keys": [],
      "nullable_keys": [],
      "index_labels": [
        "createdByDate",
        "createdByWeight"
      ],
      ...
      "ttl": 0,
      "enable_label_index": true,
      "user_data": {
        "~create_time": "2023-05-08 17:49:08.446"
      }
    }
  ],
  "indexlabels": [
    {
      "id": 1,
      "name": "personByAge",
      "base_type": "VERTEX_LABEL",
      "base_value": "person",
      "index_type": "RANGE_INT",
      "fields": [
        "age"
      ],
      "status": "CREATED",
      "user_data": {
        "~create_time": "2023-05-08 17:49:05.375"
      }
    },
    {
      "id": 2,
      "name": "personByCity",
      "base_type": "VERTEX_LABEL",
      "base_value": "person",
      ...
      "status": "CREATED",
      "user_data": {
        "~create_time": "2023-05-08 17:49:06.898"
      }
    },
    {
      "id": 3,
      "name": "personByAgeAndCity",
      "base_type": "VERTEX_LABEL",
      "base_value": "person",
      "index_type": "SECONDARY",
      "fields": [
        "age",
        "city"
      ],
      "status": "CREATED",
      "user_data": {
        "~create_time": "2023-05-08 17:49:07.407"
      }
    },
    {
      "id": 4,
      "name": "softwareByPrice",
      "base_type": "VERTEX_LABEL",
      "base_value": "software",
      "index_type": "RANGE_DOUBLE",
      "fields": [
        "price"
      ],
      "status": "CREATED",
      "user_data": {
        "~create_time": "2023-05-08 17:49:07.916"
      }
    },
    {
      "id": 5,
      "name": "createdByDate",
      "base_type": "EDGE_LABEL",
      "base_value": "created",
      "index_type": "SECONDARY",
      "fields": [
        "date"
      ],
      "status": "CREATED",
      "user_data": {
        "~create_time": "2023-05-08 17:49:08.454"
      }
    },
    {
      "id": 6,
      "name": "createdByWeight",
      "base_type": "EDGE_LABEL",
      "base_value": "created",
      "index_type": "RANGE_DOUBLE",
      "fields": [
        "weight"
      ],
      "status": "CREATED",
      "user_data": {
        "~create_time": "2023-05-08 17:49:08.963"
      }
    },
    {
      "id": 7,
      "name": "knowsByWeight",
      "base_type": "EDGE_LABEL",
      "base_value": "knows",
      ...
      "status": "CREATED",
      "user_data": {
        "~create_time": "2023-05-08 17:49:09.473"
      }
    }
  ]
}
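For a quick check of this endpoint from the command line, a curl sketch (assuming a local server and the default graph name hugegraph):
# fetch the complete schema of the graph in one call
curl -s "http://localhost:8080/graphs/hugegraph/schema"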
1.2 - PropertyKey API
1.2 PropertyKey
Params:
- name: property key name, required
- data_type: property data type, one of bool, byte, int, long, float, double, string, date, uuid, blob; default string
- cardinality: property cardinality, one of single, list, set; default single
Request body fields:
- id: property key id
- properties: the property's own properties; empty for property keys
- user_data: general information about the property key, e.g. the value range of the age property, min 0 and max 100; currently not validated at all, reserved as an extension point
1.2.1 Create a PropertyKey
Method & Url
POST http://localhost:8080/graphs/hugegraph/schema/propertykeys

Request Body
{
"name": "age",
"data_type": "INT",
"cardinality": "SINGLE"
...
},
"task_id": 0
}
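The same request as a curl sketch (assuming a local server; the body mirrors the example above):
# create the "age" property key as a single-valued int
curl -X POST -H "Content-Type: application/json" \
     -d '{"name": "age", "data_type": "INT", "cardinality": "SINGLE"}' \
     "http://localhost:8080/graphs/hugegraph/schema/propertykeys"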
1.2.2 Add or remove userdata for an existing PropertyKey
Params
- action: whether to add or remove, either append (add) or eliminate (remove)
Method & Url
PUT http://localhost:8080/graphs/hugegraph/schema/propertykeys/age?action=append

Request Body
{
"name": "age",
"user_data": {
"min": 0,
...
},
"task_id": 0
}
1.2.3 Get all PropertyKeys
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/propertykeys

Response Status
200
Response Body
{
"propertykeys": [
{
...
}
]
}
1.2.4 Get a PropertyKey by name
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/propertykeys/age

where age is the name of the PropertyKey to get
Response Status
200
Response Body
{
"id": 1,
"name": "age",
...
"~create_time": "2022-05-13 13:47:23.745"
}
}
1.2.5 Delete a PropertyKey by name
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/schema/propertykeys/age

where age is the name of the PropertyKey to delete
Response Status
202
Response Body
{
"task_id" : 0
}
1.3 - VertexLabel API
1.3 VertexLabel
Assumes the PropertyKeys listed in 1.1.3 have already been created
Params
- id: vertex label id
- name: vertex label name, required
- id_strategy: the vertex label's ID strategy: primary key, automatic, customized string, customized number, or customized UUID; default primary key
- properties: property types associated with the vertex label
- primary_keys: primary key properties; required when the ID strategy is PRIMARY_KEY, must be empty for other ID strategies
- enable_label_index: whether to enable the label index; disabled by default
- index_names: indexes created on the vertex label, see 3.4 for details
- nullable_keys: nullable properties
- user_data: general information about the vertex label, used the same way as for property keys
1.3.1 Create a VertexLabel
Method & Url
POST http://localhost:8080/graphs/hugegraph/schema/vertexlabels

Request Body
{
"name": "person",
"id_strategy": "DEFAULT",
"properties": [
...
"ttl_start_time": "createdTime",
"enable_label_index": true
}
1.3.2 Add properties or userdata to an existing VertexLabel, or remove userdata (removing properties is currently unsupported)
Params
- action: whether to add or remove, either append (add) or eliminate (remove)
Method & Url
PUT http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person?action=append

Request Body
{
"name": "person",
"properties": [
"city"
...
"super": "animal"
}
}
1.3.3 Get all VertexLabels
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/vertexlabels

Response Status
200
Response Body
{
"vertexlabels": [
{
...
}
]
}
1.3.4 Get a VertexLabel by name
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person

Response Status
200
Response Body
{
"id": 1,
"primary_keys": [
...
"super": "animal"
}
}
1.3.5 Delete a VertexLabel by name
Deleting a VertexLabel deletes the corresponding vertices and related index data, and produces an async task
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person

Response Status
202
Response Body
{
"task_id": 1
}
Note:
You can query the async task's execution status via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the async task RESTful API for more
1.4 - EdgeLabel API
1.4 EdgeLabel
Assumes the PropertyKeys from 1.2.3 and the VertexLabels from 1.3.3 have already been created
Params
- name: edge label name, required
- source_label: source vertex label name, required
- target_label: target vertex label name, required
- frequency: whether multiple edges may exist between two vertices, SINGLE or MULTIPLE; optional, default SINGLE
- properties: property types associated with the edge label, optional
- sort_keys: list of distinguishing key properties when multiple edges are allowed
- nullable_keys: nullable properties, optional, nullable by default
- enable_label_index: whether to enable the label index; disabled by default
1.4.1 Create an EdgeLabel
Method & Url
POST http://localhost:8080/graphs/hugegraph/schema/edgelabels

Request Body
{
"name": "created",
"source_label": "person",
"target_label": "software",
...
"ttl_start_time": "createdTime",
"user_data": {}
}
1.4.2 Add properties or userdata to an existing EdgeLabel, or remove userdata (removing properties is currently unsupported)
Params
- action: whether to add or remove, either append (add) or eliminate (remove)
Method & Url
PUT http://localhost:8080/graphs/hugegraph/schema/edgelabels/created?action=append

Request Body
{
"name": "created",
"properties": [
"weight"
...
"enable_label_index": true,
"user_data": {}
}
1.4.3 Get all EdgeLabels
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/edgelabels

Response Status
200
Response Body
{
"edgelabels": [
{
...
}
]
}
1.4.4 Get an EdgeLabel by name
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/edgelabels/created

Response Status
200
Response Body
{
"id": 1,
"sort_keys": [
...
"enable_label_index": true,
"user_data": {}
}
1.4.5 Delete an EdgeLabel by name
Deleting an EdgeLabel deletes the corresponding edges and related index data, and produces an async task
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/schema/edgelabels/created

Response Status
202
Response Body
{
"task_id": 1
}
Note:
You can query the async task's execution status via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the async task RESTful API for more
1.5 - IndexLabel API
1.5 IndexLabel
Assumes the PropertyKeys from 1.1.3, the VertexLabels from 1.2.3, and the EdgeLabels from 1.3.3 have already been created
1.5.1 Create an IndexLabel
Method & Url
POST http://localhost:8080/graphs/hugegraph/schema/indexlabels

Request Body
{
"name": "personByCity",
"base_type": "VERTEX_LABEL",
"base_value": "person",
...
},
"task_id": 2
}
1.5.2 Get all IndexLabels
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/indexlabels

Response Status
200
Response Body
{
"indexlabels": [
{
...
}
]
}
1.5.3 Get an IndexLabel by name
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/indexlabels/personByCity

Response Status
200
Response Body
{
"id": 1,
"base_type": "VERTEX_LABEL",
...
],
"index_type": "SECONDARY"
}
1.5.4 Delete an IndexLabel by name
Deleting an IndexLabel deletes the related index data and produces an async task
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/schema/indexlabels/personByCity

Response Status
202
Response Body
{
"task_id": 1
}
Note:
You can query the async task's execution status via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the async task RESTful API for more
1.6 - Rebuild API
1.6 Rebuild
1.6.1 Rebuild an IndexLabel
Method & Url
PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/indexlabels/personByCity

Response Status
202
Response Body
{
"task_id": 1
}
Note:
You can query the async task's execution status via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the async task RESTful API for more
1.6.2 Rebuild all indexes of a VertexLabel
Method & Url
PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/vertexlabels/person

Response Status
202
Response Body
{
"task_id": 2
}
Note:
You can query the async task's execution status via GET http://localhost:8080/graphs/hugegraph/tasks/2 (where "2" is the task_id); see the async task RESTful API for more
1.6.3 Rebuild all indexes of an EdgeLabel
Method & Url
PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/edgelabels/created

Response Status
202
Response Body
{
"task_id": 3
}
Note:
You can query the async task's execution status via GET http://localhost:8080/graphs/hugegraph/tasks/3 (where "3" is the task_id); see the async task RESTful API for more
1.7 - Vertex API
2.1 Vertex
The vertex label's Id strategy determines the vertex Id type; the mapping is:
Id_Strategy | id type
AUTOMATIC | number
PRIMARY_KEY | string
CUSTOMIZE_STRING | string
CUSTOMIZE_NUMBER | number
CUSTOMIZE_UUID | uuid
For the vertex GET/PUT/DELETE APIs, the id part of the url should carry type information, expressed by whether the JSON string is quoted:
- when the id type is number, the id in the url is unquoted, e.g. xxx/vertices/123456
- when the id type is string, the id in the url is quoted, e.g. xxx/vertices/"123456"
The following examples assume the schema described above has already been created
2.1.1 Create a vertex
Method & Url
POST http://localhost:8080/graphs/hugegraph/graph/vertices

Request Body
{
"label": "person",
"properties": {
"name": "marko",
...
]
}
}
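The same request as a curl sketch (assuming a local server and the schema above; person uses the primary-key id strategy, so the server derives an id like "1:marko"):
# create a person vertex via the REST API
curl -X POST -H "Content-Type: application/json" \
     -d '{"label": "person", "properties": {"name": "marko", "age": 29, "city": "Beijing"}}' \
     "http://localhost:8080/graphs/hugegraph/graph/vertices"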
2.1.2 Create multiple vertices
Method & Url
POST http://localhost:8080/graphs/hugegraph/graph/vertices/batch

Request Body
[
{
"label": "person",
"properties": {
...
"1:marko",
"2:ripple"
]
2.1.3 Update vertex properties
Method & Url
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/"1:marko"?action=append

Request Body
{
"label": "person",
"properties": {
"age": 30,
...
}
]
}
Method & Url
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/batch

Request Body
{
"vertices":[
{
"label":"software",
...
}
]
}
Result analysis:
- the lang property specified no update strategy, so the new value overwrites the old regardless of whether the new value is null;
- the price property specified the BIGGER strategy; the old value was 328 and the new value 299, so the old value 328 is kept;
- the age property specified the OVERRIDE strategy, but the new payload carried no age, which is equivalent to age being null, so the original value 32 is kept;
- the city property also specified the OVERRIDE strategy, and the new value is not null, so the old value is overwritten;
- the weight property specified the SUM strategy; the old value was 0.1 and the new value 0.2, giving a final value of 0.3;
- the hobby property (Set cardinality) specified the UNION strategy, so the new value is unioned with the old;
The other update strategies work analogously and are not repeated here.
2.1.5 Delete vertex properties
Method & Url
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/"1:marko"?action=eliminate

Request Body
{
"label": "person",
"properties": {
"city": "Beijing"
...
]
}
}
2.1.6 Get vertices matching conditions
Params
- label: vertex label
- properties: property key-value pairs (querying by property requires a pre-built index)
- limit: maximum number of results
- page: page number
All of the above are optional. If page is provided, limit must also be provided and no other parameters are allowed. label, properties, and limit can be combined freely.
Property key-value pairs are JSON-formatted property names and values. Multiple key-value pairs may be given as query conditions. Property values support exact matching and range matching: exact matching looks like properties={"age":29}, range matching looks like properties={"age":"P.gt(29)"}. The supported range expressions are:
Expression | Meaning
P.eq(number) | vertices whose property value equals number
P.neq(number) | vertices whose property value does not equal number
P.lt(number) | vertices whose property value is less than number
P.lte(number) | vertices whose property value is less than or equal to number
P.gt(number) | vertices whose property value is greater than number
P.gte(number) | vertices whose property value is greater than or equal to number
P.between(number1,number2) | vertices whose property value is >= number1 and < number2
P.inside(number1,number2) | vertices whose property value is > number1 and < number2
P.outside(number1,number2) | vertices whose property value is < number1 or > number2
P.within(value1,value2,value3,...) | vertices whose property value equals any of the given values
Query all vertices with age 29 and label person
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/vertices?label=person&properties={"age":29}&limit=1

Response Status
200
Response Body
{
"vertices": [
{
...
}
]
}
Query all vertices with paging, fetching the first page (page with no value), limited to 3
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/vertices?page&limit=3

Response Status
200
Response Body
{
"vertices": [{
"id": "2:ripple",
...
"page": "001000100853313a706574657200f07ffffffc00e797c6349be736fffc8699e8a502efe10004"
}
The returned body carries the page number of the next page, "page": "001000100853313a706574657200f07ffffffc00e797c6349be736fffc8699e8a502efe10004", which should be assigned to the page parameter when querying the next page.
Query all vertices with paging, fetching the next page (pass the page value returned by the previous page), limited to 3
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/vertices?page=001000100853313a706574657200f07ffffffc00e797c6349be736fffc8699e8a502efe10004&limit=3

Response Status
200
Response Body
{
"vertices": [{
"id": "1:josh",
...
],
"page": null
}
"page": null now indicates that there is no next page (note: with the Cassandra backend, for performance reasons, when the returned page happens to be the last page the returned page value may be non-null; requesting the next page with that page value then returns empty data and page = null; other cases behave similarly)
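The paging flow can be scripted end to end. A sketch (assuming a local server and jq installed for JSON parsing; the loop follows the page cursor until it comes back null):
# page through all vertices, 3 per request, until "page" is null
url='http://localhost:8080/graphs/hugegraph/graph/vertices?page&limit=3'
while true; do
    resp=$(curl -s "$url")
    echo "$resp" | jq -r '.vertices[].id'   # process the current page of ids
    page=$(echo "$resp" | jq -r '.page')    # cursor for the next page
    [ "$page" = "null" ] && break           # null page means no next page
    url="http://localhost:8080/graphs/hugegraph/graph/vertices?page=${page}&limit=3"
done
Per the Cassandra note above, a non-null cursor may still yield one final empty page before the loop terminates.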
2.1.7 Get a vertex by Id
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"

Response Status
200
Response Body
{
"id": "1:marko",
"label": "person",
...
]
}
}
2.1.8 Delete a vertex by Id
Params
- label: vertex label, optional
Delete a vertex by Id only
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"

Response Status
204

Delete a vertex by Label + Id
Deleting a vertex by specifying both the Label parameter and the Id generally performs better than deleting by Id only.
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"?label=person

Response Status
204
1.8 - Edge API
2.2 Edge
The change to the vertex id format also affects edge Ids as well as the format of source and target vertex ids.
An EdgeId is the concatenation of src-vertex-id + direction + label + sort-values + tgt-vertex-id, but here the vertex id type is distinguished not by quotes but by a prefix:
- when the id type is number, the vertex id in the EdgeId has an L prefix, e.g. "L123456>1>>L987654"
- when the id type is string, the vertex id in the EdgeId has an S prefix, e.g. "S1:peter>1>>S2:lop"
The following examples assume the schema and vertices described above have already been created
2.2.1 Create an edge
Params
- label: edge label name, required
- outV: source vertex id, required
- inV: target vertex id, required
- outVLabel: source vertex label, required
- inVLabel: target vertex label, required
- properties: properties of the edge, with inner structure:
- name: property name
- value: property value
Method & Url
POST http://localhost:8080/graphs/hugegraph/graph/edges

Request Body
{
"label": "created",
"outV": "1:peter",
"inV": "2:lop",
...
"weight": 0.2
}
}
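And as a curl sketch (same assumptions as the vertex example above):
# create a "created" edge from person 1:peter to software 2:lop
curl -X POST -H "Content-Type: application/json" \
     -d '{"label": "created", "outV": "1:peter", "inV": "2:lop", "outVLabel": "person", "inVLabel": "software", "properties": {"date": "20170324", "weight": 0.2}}' \
     "http://localhost:8080/graphs/hugegraph/graph/edges"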
2.2.2 Create multiple edges
Params
- check_vertex: whether to check that the vertices exist (true | false); when set to true, an error is reported if an edge's source or target vertex does not exist.
Method & Url
POST http://localhost:8080/graphs/hugegraph/graph/edges/batch

Request Body
[
{
"label": "created",
"outV": "1:peter",
...
"S1:peter>1>>S2:lop",
"S1:marko>2>>S1:vadas"
]
2.2.3 Update edge properties
Method & Url
PUT http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?action=append

Request Body
{
"properties": {
"weight": 1.0
}
...
}
]
}
Method & Url
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/edges/batch

Request Body
{
"edges":[
{
"id":"S1:josh>2>>S2:ripple",
...
}
]
}
2.2.5 Delete edge properties
Method & Url
PUT http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?action=eliminate

Request Body
{
"properties": {
"weight": 1.0
}
...
}
}
2.2.6 Get edges matching conditions
Params
- vertex_id: vertex id
- direction: edge direction (OUT | IN | BOTH)
- label: edge label
- properties: property key-value pairs (querying by property requires a pre-built index)
- offset: offset, default 0
- limit: number of results, default 100
- page: page number
The supported queries are:
- when vertex_id is provided, page cannot be used; direction, label, and properties are optional, and offset and limit can narrow the result range
- when vertex_id is not provided, label and properties are optional
- when page is used: offset is unavailable (omit it or set it to 0), direction is unavailable, and at most one property is allowed
- when page is not used: offset and limit can narrow the result range, and direction is ignored
Property key-value pairs are JSON-formatted property names and values. Multiple key-value pairs may be given as query conditions. Property values support exact matching and range matching: exact matching looks like properties={"weight":0.8}, range matching looks like properties={"age":"P.gt(0.8)"}. The supported range expressions are:
Expression | Meaning
P.eq(number) | edges whose property value equals number
P.neq(number) | edges whose property value does not equal number
P.lt(number) | edges whose property value is less than number
P.lte(number) | edges whose property value is less than or equal to number
P.gt(number) | edges whose property value is greater than number
P.gte(number) | edges whose property value is greater than or equal to number
P.between(number1,number2) | edges whose property value is >= number1 and < number2
P.inside(number1,number2) | edges whose property value is > number1 and < number2
P.outside(number1,number2) | edges whose property value is < number1 or > number2
P.within(value1,value2,value3,...) | edges whose property value equals any of the given values
Query edges connected to vertex person:josh (vertex_id="1:josh") with label created
Method & Url
GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?vertex_id="1:josh"&direction=BOTH&label=created&properties={}

Response Status
200
Response Body
{
"edges": [
{
...
}
]
}
Query all edges with paging, fetching the first page (page with no value), limited to 3
Method & Url
GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?page&limit=3

Response Status
200
Response Body
{
"edges": [{
"id": "S1:peter>2>>S2:lop",
...
"page": "002500100753313a6a6f73681210010004000000020953323a726970706c65f07ffffffcf07ffffffd8460d63f4b398dd2721ed4fdb7716b420004"
}
The returned body carries the page number of the next page, "page": "002500100753313a6a6f73681210010004000000020953323a726970706c65f07ffffffcf07ffffffd8460d63f4b398dd2721ed4fdb7716b420004", which should be assigned to the page parameter when querying the next page.
Query all edges with paging, fetching the next page (pass the page value returned by the previous page), limited to 3
Method & Url
GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?page=002500100753313a6a6f73681210010004000000020953323a726970706c65f07ffffffcf07ffffffd8460d63f4b398dd2721ed4fdb7716b420004&limit=3

Response Status
200
Response Body
{
"edges": [{
"id": "S1:marko>1>20130220>S1:josh",
...
],
"page": null
}
"page": null now indicates that there is no next page (note: with the Cassandra backend, for performance reasons, when the returned page happens to be the last page the returned page value may be non-null; requesting the next page with that page value then returns empty data and page = null; other cases behave similarly)
2.2.7 Get an edge by Id
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop

Response Status
200
Response Body
{
"id": "S1:peter>1>>S2:lop",
"label": "created",
...
"weight": 0.2
}
}
2.2.8 Delete an edge by Id
Params
- label: edge label, optional
Delete an edge by Id only
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop

Response Status
204

Delete an edge by Label + Id
Deleting an edge by specifying both the Label parameter and the Id generally performs better than deleting by Id only.
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?label=person

Response Status
204
1.9 - Traverser API
3.1 Traverser API overview
HugeGraphServer provides a RESTful API for the HugeGraph graph database. Besides basic CRUD of vertices and edges, it offers traversal methods, which we call the traverser API. These traversals implement complex graph algorithms that help users analyze and mine graphs.
The Traverser APIs supported by HugeGraph include:
HugeGraph支持的Traverser API包括:
- K-out API,根据起始顶点,查找恰好N步可达的邻居,分为基础版和高级版:
- 基础版使用GET方法,根据起始顶点,查找恰好N步可达的邻居
- 高级版使用POST方法,根据起始顶点,查找恰好N步可达的邻居,与基础版的不同在于:
- 支持只统计邻居数量
- 支持边属性过滤
- 支持返回到达邻居的最短路径
- K-neighbor API,根据起始顶点,查找N步以内可达的所有邻居,分为基础版和高级版:
- 基础版使用GET方法,根据起始顶点,查找N步以内可达的所有邻居
- 高级版使用POST方法,根据起始顶点,查找N步以内可达的所有邻居,与基础版的不同在于:
- 支持只统计邻居数量
- 支持边属性过滤
- 支持返回到达邻居的最短路径
- Same Neighbors, 查询两个顶点的共同邻居
- Jaccard Similarity API,计算jaccard相似度,包括两种:
- 一种是使用GET方法,计算两个顶点的邻居的相似度(交并比)
- 一种是使用POST方法,在全图中查找与起点的jaccard similarity最高的N个点
- Shortest Path API,查找两个顶点之间的最短路径
- All Shortest Paths,查找两个顶点间的全部最短路径
- Weighted Shortest Path,查找起点到目标点的带权最短路径
- Single Source Shortest Path,查找一个点到其他各个点的加权最短路径
- Multi Node Shortest Path,查找指定顶点集之间两两最短路径
- Paths API,查找两个顶点间的全部路径,分为基础版和高级版:
- 基础版使用GET方法,根据起点和终点,查找两个顶点间的全部路径
- 高级版使用POST方法,根据一组起点和一组终点,查找两个集合间符合条件的全部路径
- Customized Paths API,从一批顶点出发,按(一种)模式遍历经过的全部路径
- Template Path API,指定起点和终点以及起点和终点间路径信息,查找符合的路径
- Crosspoints API,查找两个顶点的交点(共同祖先或者共同子孙)
- Customized Crosspoints API,从一批顶点出发,按多种模式遍历,最后一步到达的顶点的交点
- Rings API,从起始顶点出发,可到达的环路路径
- Rays API,从起始顶点出发,可到达边界的路径(即无环路径)
- Fusiform Similarity API,查找一个顶点的梭形相似点
- Vertices API
- 按ID批量查询顶点;
- 获取顶点的分区;
- 按分区查询顶点;
- Edges API
- 按ID批量查询边;
- 获取边的分区;
- 按分区查询边;
3.2 Traverser API details
The examples in the usage sections below are all based on the graph from the TinkerPop website:
The data import program is as follows:
public class Loader {
public static void main(String[] args) {
HugeClient client = new HugeClient("http://127.0.0.1:8080", "hugegraph");
        // ... (schema creation and the remaining vertices/edges are truncated in the source) ...
peter.addEdge("created", lop, "date", "20170324", "weight", 0.2);
}
}
The vertex IDs are:
"2:ripple",
"1:vadas",
"1:peter",
"1:josh",
"1:marko",
"2:lop"

The edge IDs are:
"S1:peter>2>>S2:lop",
"S1:josh>2>>S2:lop",
"S1:josh>2>>S2:ripple",
"S1:marko>1>20130220>S1:josh",
"S1:marko>1>20160110>S1:vadas",
"S1:marko>2>>S2:lop"

3.2.1 K-out API (GET, basic)
3.2.1.1 Overview
Given a start vertex, a direction, an (optional) edge label, and a depth, find the vertices reachable in exactly depth steps from the start vertex
Params
- source: start vertex id, required
- direction: direction in which to expand from the start vertex (OUT, IN, BOTH); optional, default BOTH
- max_depth: number of steps, required
- label: edge label; optional, defaults to all edge labels
- nearest: when true, the shortest path from the start vertex to each result vertex has length depth, with no shorter path existing; when false, there is a path of length depth from the start vertex to each result vertex (not necessarily shortest, and possibly with cycles); optional, default true
- max_degree: maximum number of adjacent edges traversed per vertex during the query; optional, default 10000
- capacity: maximum number of vertices visited during the traversal; optional, default 10000000
- limit: maximum number of vertices returned; optional, default 10000000
3.2.1.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/kout?source="1:marko"&max_depth=2

Response Status
200
Response Body
{
"vertices":[
"2:ripple",
"1:peter"
]
}
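The same call as a curl sketch, with the default graph name hugegraph substituted for {graph}; the quotes around the string id are part of the value, so it is safest to let curl encode the query:
# vertices exactly 2 steps away from "1:marko"
curl -s -G "http://localhost:8080/graphs/hugegraph/traversers/kout" \
     --data-urlencode 'source="1:marko"' \
     --data-urlencode 'max_depth=2'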
3.2.1.3 Typical scenarios
Find vertices reachable in exactly N steps. Two examples:
- In a family tree, find all grandchildren of a person: the set of vertices person A reaches via two consecutive "son" edges.
- Discover potential friends in a social network, e.g. users two friend-hops away from the target user, reachable via two consecutive "friend" edges.
3.2.2 K-out API (POST, advanced)
3.2.2.1 Overview
Given a start vertex, a step (including direction, edge labels, and filter properties), and a depth, find the vertices reachable in exactly depth steps from the start vertex.
It differs from the basic K-out in that it:
- supports counting neighbors only
- supports filtering by edge properties
- supports returning the shortest path to each neighbor
Params
- source: start vertex id, required
- the Step to take from the start vertex, required, structured as:
- direction: edge direction (OUT, IN, BOTH), default BOTH
- labels: list of edge labels
- properties: filter edges by property values
- max_degree: maximum number of adjacent edges traversed per vertex during the query, default 10000 (note: before 0.12 the step only accepted degree as the parameter name; from 0.12 on, max_degree is the unified name, with degree still accepted for backward compatibility)
- skip_degree: the minimum edge count for discarding super vertices during the query, i.e. a vertex is skipped entirely when its adjacent edge count exceeds skip_degree. Optional; if enabled, it must satisfy the constraint skip_degree >= max_degree; default 0 (disabled), meaning no vertex is skipped (note: with this enabled, the traversal attempts to visit skip_degree edges per vertex, not just max_degree, which adds traversal overhead and may significantly affect query performance; make sure you understand it before enabling)
- max_depth: number of steps, required
- nearest: when true, the shortest path from the start vertex to each result vertex has length depth, with no shorter path existing; when false, there is a path of length depth from the start vertex to each result vertex (not necessarily shortest, and possibly with cycles); optional, default true
- count_only: boolean; true returns only the count of results without the results themselves, false returns the results; default false
- with_path: true returns the shortest path from the start vertex to each neighbor, false does not; optional, default false
- with_vertex, optional, default false:
- true: the results include full vertex information (all vertices on the paths)
- with with_path true, full information of all vertices on the paths is returned
- with with_path false, full information of all neighbors is returned
- false: only vertex ids are returned
- capacity: maximum number of vertices visited during the traversal; optional, default 10000000
- limit: maximum number of vertices returned; optional, default 10000000
3.2.2.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/kout

Request Body
{
"source": "1:marko",
"step": {
"direction": "BOTH",
......
}
]
}
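The same request can be issued from Java as below; this is a sketch whose body fields come from the documented params above, with illustrative values (server address and graph name are assumptions):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KoutPostExample {
    public static void main(String[] args) throws Exception {
        // Body fields taken from the documented params; the values are illustrative assumptions
        String body = "{"
                + "\"source\": \"1:marko\","
                + "\"step\": {\"direction\": \"BOTH\", \"max_degree\": 10000},"
                + "\"max_depth\": 2,"
                + "\"nearest\": true,"
                + "\"count_only\": false,"
                + "\"with_path\": false,"
                + "\"with_vertex\": false,"
                + "\"limit\": 10000"
                + "}";
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://127.0.0.1:8080/graphs/hugegraph/traversers/kout"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}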
3.2.2.3 Applicable scenarios
See 3.2.1.3
3.2.3 K-neighbor (GET, basic version)
3.2.3.1 Function introduction
Find all vertices reachable within depth steps, including the source vertex itself, given the source vertex, direction, edge type (optional) and depth.
Equivalent to the union of: the source vertex, K-out(1), K-out(2), ..., K-out(max_depth)
Params
- source: ID of the source vertex, required
- direction: direction in which to expand from the source vertex (OUT, IN, BOTH), optional, defaults to BOTH
- max_depth: number of steps, required
- label: edge type, optional, defaults to all edge labels
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- limit: maximum number of vertices returned, which is also the maximum number of vertices visited during the traversal, optional, defaults to 10000000
3.2.3.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/kneighbor?source="1:marko"&max_depth=2
Response Status
200
Response Body
{
"vertices":[
"2:ripple",
......
"2:lop"
]
}
3.2.3.3 Applicable scenarios
Find all vertices reachable within N steps, e.g.:
- In family relationships, find all descendants of a person within five generations: the set of vertices that person A reaches via up to 5 consecutive "parent-child" edges.
- Discover friend circles in a social network: the users that the target user reaches via 1, 2 or 3 "friend" edges form the target user's circle of friends.
3.2.4 K-neighbor API (POST, advanced version)
3.2.4.1 Function introduction
Find all vertices reachable within depth steps from the source vertex, given the source vertex, a step (including direction, edge types and property filters) and the depth.
It differs from the basic K-neighbor in that it:
- supports returning only the neighbor count
- supports filtering edges by property
- supports returning the shortest path to each neighbor
Params
- source: ID of the source vertex, required
- step: the Step starting from the source vertex, required, with the following structure:
  - direction: direction of the edges (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge types
  - properties: filter edges by property values
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before version 0.12, only degree was supported as the parameter name inside step; since 0.12 the unified name is max_degree, and degree is still accepted for backward compatibility)
  - skip_degree: sets the minimum number of edges at which a super vertex is discarded during the query, i.e. when a vertex's number of adjacent edges is greater than skip_degree, the vertex is discarded entirely. Optional; if enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal will attempt to visit skip_degree edges of each vertex, not just max_degree edges, which adds traversal overhead and may significantly affect query performance; please make sure you understand it before enabling.)
- max_depth: number of steps, required
- count_only: Boolean; true means only the number of results is returned, without the results themselves; false means the results are returned. Defaults to false
- with_path: true means the shortest path from the source vertex to each neighbor is returned, false means it is not. Optional, defaults to false
- with_vertex: optional, defaults to false:
  - true means the results include full vertex information (all vertices on the paths)
    - when with_path is true, full information of all vertices on the paths is returned
    - when with_path is false, full information of all neighbors is returned
  - false means only vertex IDs are returned
- limit: maximum number of vertices returned, optional, defaults to 10000000
3.2.4.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/kneighbor
Request Body
{
"source": "1:marko",
"step": {
"direction": "BOTH",
......
}
]
}
3.2.4.3 Applicable scenarios
See 3.2.3.3
3.2.5 Same Neighbors
3.2.5.1 Function introduction
Query the common neighbors of two vertices.
Params
- vertex: ID of one vertex, required
- other: ID of the other vertex, required
- direction: direction in which to expand from the vertices (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge type, optional, defaults to all edge labels
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- limit: maximum number of common neighbors returned, optional, defaults to 10000000
3.2.5.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/sameneighbors?vertex="1:marko"&other="1:josh"
Response Status
200
Response Body
{
"same_neighbors":[
"2:lop"
]
}
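To check this against the sample graph: both 1:marko and 1:josh have a created edge to 2:lop (edges S1:marko>2>>S2:lop and S1:josh>2>>S2:lop), so 2:lop is their only common neighbor, matching the response above.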
3.2.5.3 Applicable scenarios
Find the common neighbors of two vertices:
- In a social network, find the common followers or common followees of two users.
3.2.6 Jaccard Similarity (GET)
3.2.6.1 Function introduction
Compute the Jaccard similarity of two vertices (the intersection of the two vertices' neighbor sets divided by their union).
Params
- vertex: ID of one vertex, required
- other: ID of the other vertex, required
- direction: direction in which to expand from the vertices (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge type, optional, defaults to all edge labels
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
3.2.6.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/jaccardsimilarity?vertex="1:marko"&other="1:josh"
Response Status
200
Response Body
{
"jaccard_similarity": 0.2
}
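To see where 0.2 comes from in the sample graph: taking BOTH directions, the neighbors of 1:marko are {1:vadas, 1:josh, 2:lop} and the neighbors of 1:josh are {1:marko, 2:lop, 2:ripple}. The intersection is {2:lop} (size 1) and the union contains 5 distinct vertices, so the Jaccard similarity is 1/5 = 0.2.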
3.2.6.3 Applicable scenarios
Used to evaluate the similarity or closeness of two vertices.
3.2.7 Jaccard Similarity (POST)
3.2.7.1 Function introduction
Compute the N vertices with the highest Jaccard similarity to the given vertex.
The Jaccard similarity is computed as: the intersection of the two vertices' neighbor sets divided by their union.
Params
- vertex: ID of a vertex, required
- step: the Step starting from the source vertex, required, with the following structure:
  - direction: direction of the edges (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge types
  - properties: filter edges by property values
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before version 0.12, only degree was supported as the parameter name inside step; since 0.12 the unified name is max_degree, and degree is still accepted for backward compatibility)
  - skip_degree: sets the minimum number of edges at which a super vertex is discarded during the query, i.e. when a vertex's number of adjacent edges is greater than skip_degree, the vertex is discarded entirely. Optional; if enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal will attempt to visit skip_degree edges of each vertex, not just max_degree edges, which adds traversal overhead and may significantly affect query performance; please make sure you understand it before enabling.)
- top: return the top entries with the highest Jaccard similarity to the source vertex, optional, defaults to 100
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
3.2.7.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/jaccardsimilarity
Request Body
{
"vertex": "1:marko",
"step": {
"direction": "BOTH",
......
"1:peter": 0.3333333333333333,
"1:josh": 0.2
}
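Note that the score for 1:josh (0.2) agrees with the pairwise GET example in 3.2.6.2.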
3.2.7.3 Applicable scenarios
Used to find the vertices in the graph with the highest similarity to a given vertex.
3.2.8 Shortest Path
3.2.8.1 Function introduction
Find one shortest path, given the source vertex, target vertex, direction, edge type (optional) and maximum depth.
Params
- source: ID of the source vertex, required
- target: ID of the target vertex, required
- direction: direction in which to expand from the source vertex (OUT, IN, BOTH), optional, defaults to BOTH
- max_depth: maximum number of steps, required
- label: edge type, optional, defaults to all edge labels
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- skip_degree: sets the minimum number of edges at which a super vertex is discarded during the query, i.e. when a vertex's number of adjacent edges is greater than skip_degree, the vertex is discarded entirely. Optional; if enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal will attempt to visit skip_degree edges of each vertex, not just max_degree edges, which adds traversal overhead and may significantly affect query performance; please make sure you understand it before enabling.)
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
3.2.8.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/shortestpath?source="1:marko"&target="2:ripple"&max_depth=3
Response Status
200
Response Body
{
"path":[
"1:marko",
......
"2:ripple"
]
}
3.2.8.3 Applicable scenarios
Find the shortest path between two vertices, e.g.:
- In a social network, find the shortest relationship path between two users, i.e. the closest chain of friendships.
- In a device-association network, find the shortest association between two devices.
3.2.9 All Shortest Paths
3.2.9.1 Function introduction
Find all shortest paths between two vertices, given the source vertex, target vertex, direction, edge type (optional) and maximum depth.
Params
- source: ID of the source vertex, required
- target: ID of the target vertex, required
- direction: direction in which to expand from the source vertex (OUT, IN, BOTH), optional, defaults to BOTH
- max_depth: maximum number of steps, required
- label: edge type, optional, defaults to all edge labels
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- skip_degree: sets the minimum number of edges at which a super vertex is discarded during the query, i.e. when a vertex's number of adjacent edges is greater than skip_degree, the vertex is discarded entirely. Optional; if enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal will attempt to visit skip_degree edges of each vertex, not just max_degree edges, which adds traversal overhead and may significantly affect query performance; please make sure you understand it before enabling.)
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
3.2.9.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/allshortestpaths?source="A"&target="Z"&max_depth=10
Response Status
200
Response Body
{
"paths":[
{
......
}
]
}
3.2.9.3 Applicable scenarios
Find all shortest paths between two vertices, e.g.:
- In a social network, find all shortest relationship paths between two users, i.e. the closest chains of friendships.
- In a device-association network, find all shortest associations between two devices.
3.2.10 Weighted Shortest Path
3.2.10.1 Function introduction
Find one weighted shortest path, given the source vertex, target vertex, direction and edge type (optional).
Params
- source: ID of the source vertex, required
- target: ID of the target vertex, required
- direction: direction in which to expand from the source vertex (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge type, optional, defaults to all edge labels
- weight: the edge property used as the weight, required; it must be a numeric property
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- skip_degree: sets the minimum number of edges at which a super vertex is discarded during the query, i.e. when a vertex's number of adjacent edges is greater than skip_degree, the vertex is discarded entirely. Optional; if enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal will attempt to visit skip_degree edges of each vertex, not just max_degree edges, which adds traversal overhead and may significantly affect query performance; please make sure you understand it before enabling.)
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- with_vertex: true means the results include full vertex information (all vertices on the path); false means only vertex IDs are returned. Optional, defaults to false
3.2.10.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/weightedshortestpath?source="1:marko"&target="2:ripple"&weight="weight"&with_vertex=true
Response Status
200
Response Body
{
"path": {
"weight": 2.0,
......
}
]
}
3.2.10.3 Applicable scenarios
Find the weighted shortest path between two vertices, e.g.:
- In a transportation network, find the cheapest way to travel from city A to city B.
3.2.11 Single Source Shortest Path
3.2.11.1 Function introduction
Starting from one vertex, find the shortest paths from that vertex to the other vertices in the graph (optionally weighted).
Params
- source: ID of the source vertex, required
- direction: direction in which to expand from the source vertex (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge type, optional, defaults to all edge labels
- weight: the edge property used as the weight, optional; it must be a numeric property. If it is omitted, or if an edge lacks the property, the weight is 1.0
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- skip_degree: sets the minimum number of edges at which a super vertex is discarded during the query, i.e. when a vertex's number of adjacent edges is greater than skip_degree, the vertex is discarded entirely. Optional; if enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal will attempt to visit skip_degree edges of each vertex, not just max_degree edges, which adds traversal overhead and may significantly affect query performance; please make sure you understand it before enabling.)
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: number of target vertices to find, which is also the number of shortest paths returned, optional, defaults to 10
- with_vertex: true means the results include full vertex information (all vertices on the paths); false means only vertex IDs are returned. Optional, defaults to false
3.2.11.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/singlesourceshortestpath?source="1:marko"&with_vertex=true
Response Status
200
Response Body
{
"paths": {
"2:ripple": {
......
}
]
}
3.2.11.3 Applicable scenarios
Find the weighted shortest paths from one vertex to other vertices, e.g.:
- Find the fastest travel plans from Beijing to every other city in the country.
3.2.12 Multi Node Shortest Path
3.2.12.1 Function introduction
Find the shortest paths between every pair of vertices in a given vertex set.
Params
- vertices: defines the source vertices, required; they can be specified by:
  - ids: provide the source vertices as a list of vertex IDs
  - label and properties: if ids is not specified, the source vertices are queried by the combined condition of label and properties
    - label: vertex type
    - properties: query source vertices by property values
    Note: property values in properties may be lists, meaning the value for the key only needs to be contained in the list
- step: the path taken from a source vertex to a target vertex, required. The Step is structured as follows:
  - direction: direction of the edges (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge types
  - properties: filter edges by property values
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before version 0.12, only degree was supported as the parameter name inside step; since 0.12 the unified name is max_degree, and degree is still accepted for backward compatibility)
  - skip_degree: sets the minimum number of edges at which a super vertex is discarded during the query, i.e. when a vertex's number of adjacent edges is greater than skip_degree, the vertex is discarded entirely. Optional; if enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal will attempt to visit skip_degree edges of each vertex, not just max_degree edges, which adds traversal overhead and may significantly affect query performance; please make sure you understand it before enabling.)
- max_depth: number of steps, required
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- with_vertex: true means the results include full vertex information (all vertices on the paths); false means only vertex IDs are returned. Optional, defaults to false
3.2.12.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/multinodeshortestpath
Request Body
{
"vertices": {
"ids": ["382:marko", "382:josh", "382:vadas", "382:peter", "383:lop", "383:ripple"]
},
......
}
]
}
3.2.12.3 Applicable scenarios
Find the shortest paths among multiple vertices, e.g.:
- Find the shortest paths among multiple companies and their legal representatives.
3.2.13 Paths (GET, basic version)
3.2.13.1 Function introduction
Find all paths, given conditions such as the source vertex, target vertex, direction, edge type (optional) and maximum depth.
Params
- source: ID of the source vertex, required
- target: ID of the target vertex, required
- direction: direction in which to expand from the source vertex (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge type, optional, defaults to all edge labels
- max_depth: number of steps, required
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of paths returned, optional, defaults to 10
3.2.13.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/paths?source="1:marko"&target="1:josh"&max_depth=5
Response Status
200
Response Body
{
"paths":[
{
......
}
]
}
3.2.13.3 Applicable scenarios
Find all paths between two vertices, e.g.:
- In a social network, find all possible relationship paths between two users.
- In a device-association network, find all association paths between two devices.
3.2.14 Paths (POST, advanced version)
3.2.14.1 Function introduction
Find all paths, given conditions such as the source vertices, target vertices, step and maximum depth.
Params
- sources: defines the source vertices, required; they can be specified by:
  - ids: provide the source vertices as a list of vertex IDs
  - label and properties: if ids is not specified, the source vertices are queried by the combined condition of label and properties
    - label: vertex type
    - properties: query source vertices by property values
    Note: property values in properties may be lists, meaning the value for the key only needs to be contained in the list
- targets: defines the target vertices, required; they can be specified by:
  - ids: provide the target vertices as a list of vertex IDs
  - label and properties: if ids is not specified, the target vertices are queried by the combined condition of label and properties
    - label: vertex type
    - properties: query target vertices by property values
    Note: property values in properties may be lists, meaning the value for the key only needs to be contained in the list
- step: the path taken from a source vertex to a target vertex, required. The Step is structured as follows:
  - direction: direction of the edges (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge types
  - properties: filter edges by property values
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before version 0.12, only degree was supported as the parameter name inside step; since 0.12 the unified name is max_degree, and degree is still accepted for backward compatibility)
  - skip_degree: sets the minimum number of edges at which a super vertex is discarded during the query, i.e. when a vertex's number of adjacent edges is greater than skip_degree, the vertex is discarded entirely. Optional; if enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal will attempt to visit skip_degree edges of each vertex, not just max_degree edges, which adds traversal overhead and may significantly affect query performance; please make sure you understand it before enabling.)
- max_depth: number of steps, required
- nearest: when true, the shortest path from the source vertex to each result vertex has length depth, i.e. no shorter path exists; when false, there is a path of length depth from the source vertex to the result vertex (not necessarily the shortest, and it may contain cycles). Optional, defaults to true
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of paths returned, optional, defaults to 10
- with_vertex: true means the results include full vertex information (all vertices on the paths); false means only vertex IDs are returned. Optional, defaults to false
3.2.14.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/paths
Request Body
{
"sources": {
"ids": ["1:marko"]
},
......
}
]
}
3.2.14.3 Applicable scenarios
Find all paths between two vertices, e.g.:
- In a social network, find all possible relationship paths between two users.
- In a device-association network, find all association paths between two devices.
3.2.15 Customized Paths
3.2.15.1 Function introduction
Find all qualifying paths, given conditions such as a batch of source vertices, edge rules (including direction, edge type and property filters) and a maximum depth.
Params
- sources: defines the source vertices, required; they can be specified by:
  - ids: provide the source vertices as a list of vertex IDs
  - label and properties: if ids is not specified, the source vertices are queried by the combined condition of label and properties
    - label: vertex type
    - properties: query source vertices by property values
    Note: property values in properties may be lists, meaning the value for the key only needs to be contained in the list
- steps: the path rules walked from the source vertices, a list of Steps, required. Each Step is structured as follows:
  - direction: direction of the edges (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge types
  - properties: filter edges by property values
  - weight_by: compute each edge's weight from the given property; effective when sort_by is not NONE; mutually exclusive with default_weight
  - default_weight: the default weight used when an edge has no property to compute the weight from; effective when sort_by is not NONE; mutually exclusive with weight_by
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before version 0.12, only degree was supported as the parameter name inside step; since 0.12 the unified name is max_degree, and degree is still accepted for backward compatibility)
  - sample: set this to sample the qualifying edges of a step; -1 means no sampling; defaults to sampling 100
- sort_by: sort paths by their weight, optional, defaults to NONE:
  - NONE means unsorted (the default)
  - INCR means ascending order by path weight
  - DECR means descending order by path weight
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of paths returned, optional, defaults to 10
- with_vertex: true means the results include full vertex information (all vertices on the paths); false means only vertex IDs are returned. Optional, defaults to false
3.2.15.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/customizedpaths
Request Body
{
"sources":{
"ids":[
......
}
]
}
3.2.15.3 Applicable scenarios
Suitable for finding all kinds of complex path sets, e.g.:
- In a social network, find the paths from users who watched films directed by Zhang Yimou to the influencers they follow (Zhang Yimou -> film -> user -> influencer).
- In a risk-control network, find the paths from several high-risk users to the friends of their immediate relatives (high-risk user -> immediate relative -> friend).
3.2.16 Template Paths
3.2.16.1 Function introduction
Find all qualifying paths, given conditions such as a batch of source vertices, edge rules (including direction, edge type and property filters) and a maximum depth.
Params
- sources: defines the source vertices, required; they can be specified by:
  - ids: provide the source vertices as a list of vertex IDs
  - label and properties: if ids is not specified, the source vertices are queried by the combined condition of label and properties
    - label: vertex type
    - properties: query source vertices by property values
    Note: property values in properties may be lists, meaning the value for the key only needs to be contained in the list
- targets: defines the target vertices, required; they can be specified by:
  - ids: provide the target vertices as a list of vertex IDs
  - label and properties: if ids is not specified, the target vertices are queried by the combined condition of label and properties
    - label: vertex type
    - properties: query target vertices by property values
    Note: property values in properties may be lists, meaning the value for the key only needs to be contained in the list
- steps: the path rules walked from the source vertices, a list of Steps, required. Each Step is structured as follows:
  - direction: direction of the edges (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge types
  - properties: filter edges by property values
  - max_times: the number of times the current step may repeat; when set to N, the path may pass through the current step 1 to N times
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before version 0.12, only degree was supported as the parameter name inside step; since 0.12 the unified name is max_degree, and degree is still accepted for backward compatibility)
  - skip_degree: sets the minimum number of edges at which a super vertex is discarded during the query, i.e. when a vertex's number of adjacent edges is greater than skip_degree, the vertex is discarded entirely. Optional; if enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal will attempt to visit skip_degree edges of each vertex, not just max_degree edges, which adds traversal overhead and may significantly affect query performance; please make sure you understand it before enabling.)
- with_ring: Boolean; true means ring paths are included, false means they are not; defaults to false
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of paths returned, optional, defaults to 10
- with_vertex: true means the results include full vertex information (all vertices on the paths); false means only vertex IDs are returned. Optional, defaults to false
3.2.16.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/templatepaths
Request Body
{
"sources": {
"ids": [],
"label": "person",
......
}
]
}
3.2.16.3 Applicable scenarios
Suitable for finding all kinds of complex template paths, e.g. personA -(friend)-> personB -(classmate)-> personC, where the "friend" and "classmate" edges may span at most 3 and 4 levels respectively.
3.2.17 Crosspoints
3.2.17.1 Function introduction
Find intersection points, given conditions such as the source vertex, target vertex, direction, edge type (optional) and maximum depth.
Params
- source: ID of the source vertex, required
- target: ID of the target vertex, required
- direction: direction from the source vertex to the target vertex (the target-to-source direction is the reverse; BOTH ignores direction) (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge type, optional, defaults to all edge labels
- max_depth: number of steps, required
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of intersection points returned, optional, defaults to 10
3.2.17.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/crosspoints?source="2:lop"&target="2:ripple"&max_depth=5&direction=IN
Response Status
200
Response Body
{
"crosspoints":[
{
......
}
]
}
3.2.17.3 Applicable scenarios
Find the intersection points of two vertices and the paths to them, e.g.:
- In a social network, find topics or influencers that two users both follow.
- In family relationships, find common ancestors.
3.2.18 Customized Crosspoints
3.2.18.1 Function introduction
Find the intersection of the endpoints of all qualifying paths, given conditions such as a batch of source vertices, multiple edge rules (including direction, edge type and property filters) and a maximum depth.
Params
- sources: defines the source vertices, required; they can be specified by:
  - ids: provide the source vertices as a list of vertex IDs
  - label and properties: if ids is not specified, the source vertices are queried by the combined condition of label and properties
    - label: vertex type
    - properties: query source vertices by property values
    Note: property values in properties may be lists, meaning the value for the key only needs to be contained in the list
- path_patterns: the path rules walked from the source vertices, a list of rules, required. Each rule is a PathPattern:
  - each PathPattern is a list of Steps, and each Step is structured as follows:
    - direction: direction of the edges (OUT, IN, BOTH), defaults to BOTH
    - labels: list of edge types
    - properties: filter edges by property values
    - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before version 0.12, only degree was supported as the parameter name inside step; since 0.12 the unified name is max_degree, and degree is still accepted for backward compatibility)
    - skip_degree: sets the minimum number of edges at which a super vertex is discarded during the query, i.e. when a vertex's number of adjacent edges is greater than skip_degree, the vertex is discarded entirely. Optional; if enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal will attempt to visit skip_degree edges of each vertex, not just max_degree edges, which adds traversal overhead and may significantly affect query performance; please make sure you understand it before enabling.)
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of paths returned, optional, defaults to 10
- with_path: true means the paths where the crosspoints are located are returned, false means they are not; optional, defaults to false
- with_vertex: optional, defaults to false:
  - true means the results include full vertex information (all vertices on the paths)
    - when with_path is true, full information of all vertices on the paths is returned
    - when with_path is false, full information of all crosspoints is returned
  - false means only vertex IDs are returned
3.2.18.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/customizedcrosspoints
Request Body
{
"sources":{
"ids":[
"2:lop",
......
}
]
}
3.2.18.3 Applicable scenarios
Query cases where a group of vertices reach intersecting endpoints through multiple kinds of paths, e.g.:
- In a product knowledge graph, several phones, learning machines and game consoles all belong, through different lower-level category paths, to the first-level category "electronic devices".
3.2.19 Rings
3.2.19.1 Function introduction
Find reachable rings, given conditions such as the source vertex, direction, edge type (optional) and maximum depth.
For example: 1 -> 25 -> 775 -> 14690 -> 25, where the ring is 25 -> 775 -> 14690 -> 25
Params
- source: ID of the source vertex, required
- direction: direction of the edges leaving the source vertex (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge type, optional, defaults to all edge labels
- max_depth: number of steps, required
- source_in_ring: whether the ring must include the source vertex, optional, defaults to true
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of reachable rings returned, optional, defaults to 10
3.2.19.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/rings?source="1:marko"&max_depth=2
Response Status
200
Response Body
{
"rings":[
{
......
}
]
}
3.2.19.3 Applicable scenarios
Query the rings reachable from the source vertex, e.g.:
- In a risk-control project, query the people or devices in guarantee cycles reachable from a user.
- In a device-association network, discover the devices in circular references around a device.
3.2.20 Rays
3.2.20.1 Function introduction
Find paths that fan out to boundary vertices, given conditions such as the source vertex, direction, edge type (optional) and maximum depth.
For example: 1 -> 25 -> 775 -> 14690 -> 2289 -> 18379, where 18379 is a boundary vertex, i.e. no edges leave 18379
Params
- source: ID of the source vertex, required
- direction: direction of the edges leaving the source vertex (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge type, optional, defaults to all edge labels
- max_depth: number of steps, required
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of non-ring paths returned, optional, defaults to 10
3.2.20.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/rays?source="1:marko"&max_depth=2&direction=OUT
Response Status
200
Response Body
{
"rays":[
{
......
}
]
}
3.2.20.3 Applicable scenarios
Find the paths from the source vertex to the boundary vertices of some relationship, e.g.:
- In family relationships, find the paths from a person to all descendants who do not yet have children.
- In a device-association network, find the paths from a device to terminal devices.
3.2.21 Fusiform Similarity
3.2.21.1 Function introduction
Query the "fusiform similar vertices" of a batch of vertices by conditions. Two vertices are considered "fusiform similar" when they both have some relationship to many common vertices. An example of "fusiform similar vertices": "reader A" has read 100 books; the readers who have read more than 80 of those 100 books can be defined as the "fusiform similar vertices" of "reader A".
Params
- sources: defines the source vertices, required; they can be specified by:
  - ids: provide the source vertices as a list of vertex IDs
  - label and properties: if ids is not specified, the source vertices are queried by the combined condition of label and properties
    - label: vertex type
    - properties: query source vertices by property values
    Note: property values in properties may be lists, meaning the value for the key only needs to be contained in the list
- label: edge type, optional, defaults to all edge labels
- direction: direction in which to expand from the source vertices (OUT, IN, BOTH), optional, defaults to BOTH
- min_neighbors: the minimum number of neighbors; when a source vertex has fewer neighbors than this threshold, it is considered to have no "fusiform similar vertices". For example, when looking for the "fusiform similar vertices" based on the books "reader A" has read, a min_neighbors of 100 means reader A must have read at least 100 books to have any "fusiform similar vertices". Required
- alpha: the similarity, i.e. the ratio of the number of neighbors shared by the source vertex and a "fusiform similar vertex" to the total number of neighbors of the source vertex, required
- min_similars: the minimum number of "fusiform similar vertices"; a source vertex and its "fusiform similar vertices" are returned only when their count is greater than or equal to this value, optional, defaults to 1
- top: return the top "fusiform similar vertices" with the highest similarity per source vertex, required; 0 means all
- group_property: used together with min_groups; a source vertex and its "fusiform similar vertices" are returned only when some property has at least min_groups distinct values across the source vertex and all of its "fusiform similar vertices". For example, when recommending "out-of-town" book friends for "reader A", set group_property to the readers' "city" property and min_groups to at least 2. Optional; if omitted, no property-based filtering is performed
- min_groups: used together with group_property; only meaningful when group_property is set
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: upper bound on the number of results returned (one source vertex together with its "fusiform similar vertices" counts as one result), optional, defaults to 10
- with_intermediary: whether to return the intermediate vertices commonly related to the source vertex and its "fusiform similar vertices", defaults to false
- with_vertex: optional, defaults to false:
  - true means the results include full vertex information
  - false means only vertex IDs are returned
3.2.21.2 Usage
Method & Url
POST http://localhost:8080/graphs/hugegraph/traversers/fusiformsimilarity
Request Body
{
"sources":{
"ids":[],
"label": "person",
......
}
]
}
3.2.21.3 Applicable scenarios
Query groups of vertices with high similarity, e.g.:
- readers with a book list similar to another reader's
- players who play games similar to another player's
3.2.22 Vertices
3.2.22.1 Query vertices in batch by a list of vertex IDs
Params
- ids: list of the vertex IDs to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/vertices?ids="1:marko"&ids="2:lop"
Response Status
200
Response Body
{
"vertices":[
{
......
}
]
}
3.2.22.2 Get vertex Shard information
Get vertex shard information by the given shard size split_size (can be used together with the Scan in 3.2.22.3 to retrieve vertices).
Params
- split_size: shard size, required
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/vertices/shards?split_size=67108864
Response Status
200
Response Body
{
"shards":[
{
......
]
}
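A minimal sketch of a full vertex scan that combines these two endpoints (fetch the shards, then scan each shard's range); the server address, graph name and split_size are assumptions, and the start/end values would normally be taken from the shards response:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class VertexScanExample {
    static final String BASE = "http://127.0.0.1:8080/graphs/hugegraph/traversers/vertices";

    static String get(HttpClient client, String url) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return client.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // 1. Get the shards (split_size is an assumption; tune it to your data volume)
        String shards = get(client, BASE + "/shards?split_size=67108864");
        System.out.println(shards);
        // 2. For each shard in the response above, scan its range.
        //    The start/end here are illustrative; take them from the shards response.
        String vertices = get(client, BASE + "/scan?start=0&end=4294967295");
        System.out.println(vertices);
    }
}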
3.2.22.3 Get vertices in batch by Shard information
Query vertices in batch by the given shard information (see 3.2.22.2 for how to get shard information).
Params
- start: shard start position, required
- end: shard end position, required
- page: page position, optional, defaults to null (no paging); when page is "", it means the first page of the paged query, starting from the position indicated by start
- page_limit: upper bound on the number of vertices per page when paging, optional, defaults to 100000
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/vertices/scan?start=0&end=4294967295
Response Status
200
Response Body
{
"vertices":[
{
......
}
]
}
3.2.22.4 Applicable scenarios
- Querying vertices by an ID list is useful for batch lookups, e.g. after a path query returns several paths, the properties of all vertices on a path can be fetched in one call.
- Getting shards and querying vertices by shard can be used to traverse all vertices.
3.2.23 Edges
3.2.23.1 Query edges in batch by a list of edge IDs
Params
- ids: list of the edge IDs to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/edges?ids="S1:josh>1>>S2:lop"&ids="S1:josh>1>>S2:ripple"
Response Status
200
Response Body
{
"edges": [
{
......
}
]
}
3.2.23.2 Get edge Shard information
Get edge shard information by the given shard size split_size (can be used together with the Scan in 3.2.23.3 to retrieve edges).
Params
- split_size: shard size, required
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/edges/shards?split_size=4294967295
Response Status
200
Response Body
{
"shards":[
{
......
}
]
}
3.2.23.3 Get edges in batch by Shard information
Query edges in batch by the given shard information (see 3.2.23.2 for how to get shard information).
Params
- start: shard start position, required
- end: shard end position, required
- page: page position, optional, defaults to null (no paging); when page is "", it means the first page of the paged query, starting from the position indicated by start
- page_limit: upper bound on the number of edges per page when paging, optional, defaults to 100000
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/edges/scan?start=0&end=3221225469
Response Status
200
Response Body
{
"edges":[
{
......
}
]
}
Note: change the value of input.path in the mapping file to your own local path.
4.2.1.1 Function introduction
Applies to bipartite graphs: for a source vertex, produce a list of all related vertices together with their relevance scores.
Bipartite graph: a special model in graph theory, and a special kind of flow network. Its defining property is that the vertices can be divided into two sets, with edges only between the two sets and no direct edges within either set.
Assume there is a bipartite graph of users and items. The random-walk-based PersonalRank algorithm works as follows:
- Pick a start user u with initial weight 1.0 and walk from Vu (with probability alpha move to a neighbor, with probability 1 - alpha stay);
- If the walk moves on, pick one type of outgoing edge, e.g. rating, to find co-raters: choose one of the current vertex's neighbors uniformly at random, and split the weight value uniformly;
- Compensate the source vertex with weight 1 - alpha;
- Repeat step 2;
- After a fixed number of steps, or once the precision threshold is reached, the walk converges and yields the recommendation list.
Params
Required:
- source: ID of the source vertex
- label: the label of the edges leaving the source vertex; it must connect two different types of vertices
Optional:
- alpha: probability of moving outward from a vertex in each iteration, similar to alpha in PageRank, in the range (0, 1], defaults to 0.85
- max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000
- max_depth: number of iterations, in the range [2, 50], defaults to 5
- with_label: which results to keep, one of the following three, defaults to BOTH_LABEL
  - SAME_LABEL: keep only vertices of the same category as the source vertex
  - OTHER_LABEL: keep only vertices of the other category (the other side of the bipartite graph)
  - BOTH_LABEL: keep vertices of both the same and the other category as the source vertex
- limit: maximum number of vertices returned, defaults to 100
- max_diff: precision difference for early convergence, defaults to 0.0001 (to be implemented)
- sorted: whether the results are sorted by rank; descending when true, unsorted otherwise, defaults to true
4.2.1.2 Usage
Method & Url
POST http://localhost:8080/graphs/hugegraph/traversers/personalrank
Request Body
{
"source": "1:1",
"label": "rating",
"alpha": 0.6,
......
}
}
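The walk described above corresponds to the usual PersonalRank iteration; written as a formula (a restatement of the listed steps, not an addition to the API):

PR_{t+1}(v) = \alpha \sum_{u \to v} \frac{PR_t(u)}{|N(u)|} + (1 - \alpha) \cdot \mathbb{1}[v = u_0]

where N(u) is the neighbor set of u along the chosen edge label and u_0 is the source vertex that receives the 1 - alpha compensation.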
4.2.2.1 Function introduction
In a general graph structure, find the top N vertices in each layer most relevant to a given source vertex, together with their relevance. In graph terms: walking outward from the source vertex, the probability of reaching each vertex in each layer.
Params
- source: ID of the source vertex, required
- alpha: probability of moving outward from a vertex in each iteration, similar to alpha in PageRank, required, in the range (0, 1]
- steps: the path rules walked from the source vertex, a list of Steps, one Step per layer of the result, required. Each Step is structured as follows:
  - direction: direction of the edges (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge types; multiple edge types are unioned
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before version 0.12, only degree was supported as the parameter name inside step; since 0.12 the unified name is max_degree, and degree is still accepted for backward compatibility)
  - top: keep only the top N results by weight in each layer of the result, defaults to 100, maximum 1000
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
4.2.2.2 Usage
Method & Url
POST http://localhost:8080/graphs/hugegraph/traversers/neighborrank
Request Body
{
"source":"O",
"steps":[
{
......
}
]
}
4.2.2.3 Applicable scenarios
Find the vertices most worth recommending to a given source vertex in each layer.
- For example, in a four-layer graph of viewers, friends, movies and directors: recommend movies to a viewer based on the movies the viewer's friends like, or recommend directors based on who made those movies.
1.11 - Variable API
5.1 Variables
Variables can store data about the whole graph; the data is accessed as key-value pairs.
5.1.1 Create or update a key-value pair
Method & Url
PUT http://localhost:8080/graphs/hugegraph/variables/name
Request Body
{
"data": "tom"
}
Response Status
200
Response Body
{
"name": "tom"
}
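A minimal sketch of setting a variable and reading it back from Java (the server address and graph name are assumptions about your deployment):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class VariablesExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String url = "http://127.0.0.1:8080/graphs/hugegraph/variables/name";
        // Create or update the key "name" with value "tom"
        HttpRequest put = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"data\": \"tom\"}"))
                .build();
        System.out.println(client.send(put, HttpResponse.BodyHandlers.ofString()).body());
        // Read it back
        HttpRequest get = HttpRequest.newBuilder(URI.create(url)).GET().build();
        System.out.println(client.send(get, HttpResponse.BodyHandlers.ofString()).body());  // {"name": "tom"}
    }
}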
5.1.2 List all key-value pairs
Method & Url
GET http://localhost:8080/graphs/hugegraph/variables
Response Status
200
Response Body
{
"name": "tom"
}
5.1.3 Get a specific key-value pair
Method & Url
GET http://localhost:8080/graphs/hugegraph/variables/name
Response Status
200
Response Body
{
"name": "tom"
}
5.1.4 Delete a key-value pair
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/variables/name
Response Status
204
1.12 - Graphs API
6.1 Graphs
6.1.1 List all graphs in the database
Method & Url
GET http://localhost:8080/graphs
Response Status
200
Response Body
{
"graphs": [
"hugegraph",
"hugegraph1"
]
}
6.1.2 View the information of a graph
Method & Url
GET http://localhost:8080/graphs/hugegraph
Response Status
200
Response Body
{
"name": "hugegraph",
"backend": "cassandra"
}
6.1.3 Clear all data of a graph, including schema, vertices, edges and indexes; this operation requires admin permission
Params
Since clearing a graph is a dangerous operation, a confirmation parameter is added to the API to prevent accidental calls:
- confirm_message: defaults to
I'm sure to delete all data
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/clear?confirm_message=I%27m+sure+to+delete+all+data
Response Status
204
6.1.4 Clone a graph; this operation requires admin permission
Params
- clone_graph_name: name of an existing graph; the clone is made from the existing graph, and the user may optionally pass a configuration file, which then replaces the configuration of the existing graph
Method & Url
POST http://localhost:8080/graphs/hugegraph_clone?clone_graph_name=hugegraph
Request Body [optional]
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
backend=rocksdb
serializer=binary
store=hugegraph_clone
rocksdb.data_path=./hg2
rocksdb.wal_path=./hg2
Response Status
200
Response Body
{
"name": "hugegraph_clone",
"backend": "rocksdb"
}
6.1.5 Create a graph; this operation requires admin permission
Method & Url
POST http://localhost:8080/graphs/hugegraph2
Request Body
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
backend=rocksdb
serializer=binary
store=hugegraph2
rocksdb.data_path=./hg2
rocksdb.wal_path=./hg2
Response Status
200
Response Body
{
"name": "hugegraph2",
"backend": "rocksdb"
}
6.1.6 Delete a graph and all of its data
Params
Since deleting a graph is a dangerous operation, a confirmation parameter is added to the API to prevent accidental calls:
- confirm_message: defaults to
I'm sure to drop the graph
Method & Url
DELETE http://localhost:8080/graphs/hugegraph_clone?confirm_message=I%27m%20sure%20to%20drop%20the%20graph
Response Status
204
6.2 Conf
6.2.1 View the configuration of a graph; this operation requires admin permission
Method & Url
GET http://localhost:8080/graphs/hugegraph/conf
Response Status
200
Response Body
# gremlin entrance to create graph
gremlin.graph=com.baidu.hugegraph.HugeFactory

# cache config
#schema.cache_capacity=1048576
#graph.cache_capacity=10485760
#graph.cache_expire=600

# schema illegal name template
#schema.illegal_name_regex=\s+|~.*

#vertex.default_label=vertex

backend=cassandra
serializer=cassandra

store=hugegraph
...
6.3 Mode
The legal graph modes are: NONE, RESTORING, MERGING, LOADING
- NONE mode (the default): writes of schema and graph data behave normally. Specifically:
  - schema cannot be created with a user-specified ID
  - graph data (vertex) cannot be created with a user-specified ID when the id strategy is Automatic
- LOADING: enabled automatically during bulk import. Specifically:
  - required properties are not checked when adding vertices/edges
There are two different modes during Restore: Restoring and Merging
- Restoring mode: restore into a new graph. Specifically:
  - schema may be created with a user-specified ID
  - graph data (vertex) may be created with a user-specified ID when the id strategy is Automatic
- Merging mode: merge into a graph that already has schema and graph data. Specifically:
  - schema cannot be created with a user-specified ID
  - graph data (vertex) may be created with a user-specified ID when the id strategy is Automatic
Normally the graph mode is NONE. When a graph needs to be restored, temporarily change the mode to Restoring or Merging as needed, and set it back to NONE once the restore is complete.
6.3.1 View the mode of a graph
Method & Url
GET http://localhost:8080/graphs/hugegraph/mode
Response Status
200
Response Body
{
"mode": "NONE"
}
The legal graph modes are: NONE, RESTORING, MERGING
6.3.2 Set the mode of a graph; this operation requires admin permission
Method & Url
PUT http://localhost:8080/graphs/hugegraph/mode
Request Body
"RESTORING"
The legal graph modes are: NONE, RESTORING, MERGING
Response Status
200
Response Body
{
"mode": "RESTORING"
}
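A minimal sketch of the restore workflow described above (switch to RESTORING, run the restore, switch back to NONE); the server address and graph name are assumptions, and the restore step itself is elided:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestoreModeExample {
    static void setMode(HttpClient client, String mode) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(
                URI.create("http://127.0.0.1:8080/graphs/hugegraph/mode"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString("\"" + mode + "\""))
                .build();
        System.out.println(client.send(req, HttpResponse.BodyHandlers.ofString()).body());
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        setMode(client, "RESTORING");  // enter restore mode
        // ... perform the actual restore here ...
        setMode(client, "NONE");       // back to normal writes
    }
}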
6.3.3 View the read mode of a graph
Params
- name: name of the graph
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph_read_mode
Response Status
200
Response Body
{
"graph_read_mode": "ALL"
}
6.3.4 Set the read mode of a graph; this operation requires admin permission
Params
- name: name of the graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/graph_read_mode
Request Body
"OLTP_ONLY"
The legal graph read modes are: ALL, OLTP_ONLY, OLAP_ONLY
Response Status
200
Response Body
{
"graph_read_mode": "OLTP_ONLY"
}
6.4 Snapshot
6.4.1 Create a snapshot
Params
- name: name of the graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/snapshot_create
Response Status
200
Response Body
{
"hugegraph": "snapshot_created"
}
6.4.2 Resume from a snapshot
Params
- name: name of the graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/snapshot_resume
Response Status
200
Response Body
{
"hugegraph": "snapshot_resumed"
}
6.5 Compact
6.5.1 Manually compact a graph; this operation requires admin permission
Params
- name: name of the graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/compact
Response Status
200
Response Body
{
"nodes": 1,
"cluster_id": "local",
......
"local": "OK"
}
}
1.13 - Task API
7.1 Task
7.1.1 List all asynchronous tasks of a graph
Params
- status: status of the asynchronous tasks
- limit: upper bound on the number of tasks returned
Method & Url
GET http://localhost:8080/graphs/hugegraph/tasks?status=success
Response Status
200
Response Body
{
"tasks": [{
"task_name": "hugegraph.traversal().V()",
......
"task_input": "{\"gremlin\":\"hugegraph.traversal().V()\",\"bindings\":{},\"language\":\"gremlin-groovy\",\"aliases\":{\"hugegraph\":\"graph\"}}"
}]
}
7.1.2 View the information of an asynchronous task
Method & Url
GET http://localhost:8080/graphs/hugegraph/tasks/2
Response Status
200
Response Body
{
"task_name": "hugegraph.traversal().V()",
"task_progress": 0,
......
"task_callable": "com.baidu.hugegraph.api.job.GremlinAPI$GremlinJob",
"task_input": "{\"gremlin\":\"hugegraph.traversal().V()\",\"bindings\":{},\"language\":\"gremlin-groovy\",\"aliases\":{\"hugegraph\":\"graph\"}}"
}
7.1.3 Delete the information of an asynchronous task; this does not delete the task itself
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/tasks/2
Response Status
204
7.1.4 Cancel an asynchronous task; the task must be able to handle interruption
Suppose an asynchronous task has been created via the Gremlin API as follows:
"for (int i = 0; i < 10; i++) {" +
"hugegraph.addVertex(T.label, 'man');" +
"hugegraph.tx().commit();" +
......
"break;" +
"}" +
"}"
Method & Url
PUT http://localhost:8080/graphs/hugegraph/tasks/2?action=cancel
Make sure the request is sent within 10 seconds; if it is sent later than that, the task may already have finished and can no longer be cancelled.
Response Status
202
Response Body
{
"cancelled": true
}
If you then query the number of vertices labeled man, it is guaranteed to be less than 10.
1.14 - Gremlin API
8.1 Gremlin
8.1.1 Send a gremlin statement to HugeGraphServer (GET), executed synchronously
Params
- gremlin: the gremlin statement to send to HugeGraphServer for execution
- bindings: binds parameters; keys are strings and values are the bound values (strings or numbers only); similar to MySQL's Prepared Statement, used to speed up statement execution
- language: language type of the statement, defaults to gremlin-groovy
- aliases: adds aliases for existing variables in the graph space
Query vertices
Method & Url
GET http://127.0.0.1:8080/gremlin?gremlin=hugegraph.traversal().V('1:marko')
Response Status
200
Response Body
{
"requestId": "c6ef47a8-b634-4b07-9d38-6b3b69a3a556",
"status": {
@@ -3770,8 +3770,8 @@
"meta": {}
}
}
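Since the gremlin statement travels in the query string, it has to be URL-encoded; a hedged Java sketch of the GET form (JDK HttpClient; the class name is illustrative):
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class GremlinGetSketch {
    public static void main(String[] args) throws Exception {
        // Percent-encode the statement before placing it in the query string
        String gremlin = URLEncoder.encode("hugegraph.traversal().V('1:marko')", StandardCharsets.UTF_8);
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://127.0.0.1:8080/gremlin?gremlin=" + gremlin))
                .GET()
                .build();
        HttpResponse<String> resp = HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode() + " " + resp.body());
    }
}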
8.1.2 Send a gremlin statement to HugeGraphServer (POST), executed synchronously
Method & Url
POST http://localhost:8080/gremlin

Query a vertex
Request Body
{
"gremlin": "hugegraph.traversal().V('1:marko')",
"bindings": {},
"language": "gremlin-groovy",
@@ -3845,8 +3845,8 @@
"meta": {}
}
}
8.1.3 Send a gremlin statement to HugeGraphServer (POST), executed asynchronously
Method & Url
POST http://localhost:8080/graphs/hugegraph/jobs/gremlin

Query a vertex
Request Body
{
"gremlin": "g.V('1:marko')",
"bindings": {},
"language": "gremlin-groovy",
@@ -3878,8 +3878,8 @@
"user_phone": "182****9088",
"user_email": "123@xx.com"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/users

Response Status
201
Response Body
In the response, the password is returned as encrypted ciphertext
{
"user_password": "******",
"user_email": "123@xx.com",
@@ -3890,11 +3890,11 @@
"id": "-63:boss",
"user_create": "2020-11-17 14:31:07.833"
}
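A minimal Java sketch of the user-creation call above (the masked password "******" is a placeholder taken from the example, not a real credential; the class name is illustrative):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateUserSketch {
    public static void main(String[] args) throws Exception {
        String body = "{\"user_name\": \"boss\", \"user_password\": \"******\","
                + " \"user_phone\": \"182****9088\", \"user_email\": \"123@xx.com\"}";
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/graphs/hugegraph/auth/users"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> resp = HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode() + " " + resp.body()); // expect 201
    }
}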
9.2.2 Delete a user
Params
- id: the Id of the user to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/users/-63:test

Response Status
204
Response Body
1
9.2.3 Modify a user
Params
- id: the Id of the user to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/users/-63:test

Request Body
Modify user_name, user_password and user_phone
{
"user_name": "test",
"user_password": "******",
"user_phone": "183****9266"
@@ -3909,8 +3909,8 @@
"id": "-63:test",
"user_create": "2020-11-12 10:27:13.601"
}
9.2.4 List users
Params
- limit: maximum number of results returned
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/users

Response Status
200
Response Body
{
"users": [
{
@@ -3923,8 +3923,8 @@
}
]
}
9.2.5 Query a specific user
Params
- id: the Id of the user to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/users/-63:admin

Response Status
200
Response Body
{
"users": [
{
@@ -3937,8 +3937,8 @@
}
]
}
9.2.6 Query the roles of a user
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/users/-63:boss/role

Response Status
200
Response Body
{
"roles": {
"hugegraph": {
@@ -3956,8 +3956,8 @@
"group_name": "all",
"group_description": "group can do anything"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/groups

Response Status
201
Response Body
{
"group_creator": "admin",
"group_name": "all",
@@ -3966,11 +3966,11 @@
"id": "-69:all",
"group_description": "group can do anything"
}
9.3.2 Delete a group
Params
- id: the Id of the group to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/groups/-69:grant

Response Status
204
Response Body
1
9.3.3 Modify a group
Params
- id: the Id of the group to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/groups/-69:grant

Request Body
Modify group_description
{
"group_name": "grant",
"group_description": "grant"
}
@@ -3983,8 +3983,8 @@
"id": "-69:grant",
"group_description": "grant"
}
9.3.4 List groups
Params
- limit: maximum number of results returned
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/groups

Response Status
200
Response Body
{
"groups": [
{
@@ -3997,8 +3997,8 @@
}
]
}
9.3.5 Query a specific group
Params
- id: the Id of the group to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/groups/-69:all

Response Status
200
Response Body
{
"group_creator": "admin",
"group_name": "all",
@@ -4019,8 +4019,8 @@
}
]
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/targets

Response Status
201
Response Body
{
"target_creator": "admin",
"target_name": "all",
@@ -4037,11 +4037,11 @@
"id": "-77:all",
"target_update": "2020-11-11 15:32:01.192"
}
9.4.2 Delete a target
Params
- id: the Id of the target to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/targets/-77:gremlin

Response Status
204
Response Body
1
9.4.3 Modify a target
Params
- id: the Id of the target to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/targets/-77:gremlin

Request Body
Modify the type in the target definition
{
"target_name": "gremlin",
"target_graph": "hugegraph",
"target_url": "127.0.0.1:8080",
@@ -4068,8 +4068,8 @@
"id": "-77:gremlin",
"target_update": "2020-11-12 09:37:12.780"
}
9.4.4 List targets
Params
- limit: maximum number of results returned
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/targets

Response Status
200
Response Body
{
"targets": [
{
@@ -4106,8 +4106,8 @@
}
]
}
9.4.5 Query a specific target
Params
- id: the Id of the target to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/targets/-77:grant

Response Status
200
Response Body
{
"target_creator": "admin",
"target_name": "grant",
@@ -4128,8 +4128,8 @@
"user": "-63:boss",
"group": "-69:all"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/belongs

Response Status
201
Response Body
{
"belong_create": "2020-11-11 16:19:35.422",
"belong_creator": "admin",
@@ -4138,11 +4138,11 @@
"user": "-63:boss",
"group": "-69:all"
}
9.5.2 Delete a belong (user-group association)
Params
- id: the Id of the belong to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:grant

Response Status
204
Response Body
1
9.5.3 Modify a belong
Only the description of a belong can be modified; the user and group properties cannot. To change the association itself, delete the old one and create a new belong.
Params
- id: the Id of the belong to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:grant

Request Body
Modify belong_description
{
"belong_description": "update test"
}
Response Status
200
@@ -4155,8 +4155,8 @@
"user": "-63:boss",
"group": "-69:grant"
}
9.5.4 List belongs
Params
- limit: maximum number of results returned
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/belongs

Response Status
200
Response Body
{
"belongs": [
{
@@ -4169,8 +4169,8 @@
}
]
}
9.5.5 View a specific belong
Params
- id: the Id of the belong to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:all

Response Status
200
Response Body
{
"belong_create": "2020-11-11 16:19:35.422",
"belong_creator": "admin",
@@ -4184,8 +4184,8 @@
"target": "-77:all",
"access_permission": "READ"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/accesses

Response Status
201
Response Body
{
"access_permission": "READ",
"access_create": "2020-11-11 15:54:54.008",
@@ -4195,11 +4195,11 @@
"group": "-69:all",
"target": "-77:all"
}
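A sketch chaining the two grants above: link user -63:boss into group -69:all, then grant the group READ permission on target -77:all (assumes those ids already exist on the server, as in the examples; the helper and class names are illustrative):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GrantAccessSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Link the user into the group ...
        post(client, "http://localhost:8080/graphs/hugegraph/auth/belongs",
                "{\"user\": \"-63:boss\", \"group\": \"-69:all\"}");
        // ... then grant the group READ permission on the target
        post(client, "http://localhost:8080/graphs/hugegraph/auth/accesses",
                "{\"group\": \"-69:all\", \"target\": \"-77:all\", \"access_permission\": \"READ\"}");
    }

    static void post(HttpClient client, String url, String json) throws Exception {
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
        HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode() + " " + resp.body());
    }
}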
9.6.2 Delete an access (permission grant)
Params
- id: the Id of the access to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>12>S-77:all

Response Status
204
Response Body
1
9.6.3 Modify an access
Only the description of an access can be modified; the group, target and permission cannot. To change the grant itself, delete the old access and create a new one.
Params
- id: the Id of the access to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>12>S-77:all

Request Body
Modify access_description
{
"access_description": "test"
}
Response Status
200
@@ -4213,8 +4213,8 @@
"group": "-69:all",
"target": "-77:all"
}
9.6.4 List accesses
Params
- limit: maximum number of results returned
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/accesses

Response Status
200
Response Body
{
"accesses": [
{
@@ -4228,8 +4228,8 @@
}
]
}
9.6.5 Query a specific access
Params
- id: the Id of the access to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>11>S-77:all

Response Status
200
Response Body
{
"access_permission": "READ",
"access_create": "2020-11-11 15:54:54.008",
@@ -4239,8 +4239,8 @@
"group": "-69:all",
"target": "-77:all"
}
1.16 - Other API
10.1 Other
10.1.1 View the version information of HugeGraph
Method & Url
GET http://localhost:8080/versions

Response Status
200
Response Body
{
"versions": {
"version": "v1",
diff --git a/cn/docs/clients/restful-api/_print/index.html b/cn/docs/clients/restful-api/_print/index.html
index e4a35628a..0380e2cfa 100644
--- a/cn/docs/clients/restful-api/_print/index.html
+++ b/cn/docs/clients/restful-api/_print/index.html
@@ -3,22 +3,23 @@
">
This is the multi-page printable view of this section.
Click here to print.
HugeGraph RESTful API
- 1: Schema API
- 2: PropertyKey API
- 3: VertexLabel API
- 4: EdgeLabel API
- 5: IndexLabel API
- 6: Rebuild API
- 7: Vertex API
- 8: Edge API
- 9: Traverser API
- 10: Rank API
- 11: Variable API
- 12: Graphs API
- 13: Task API
- 14: Gremlin API
- 15: Authentication API
- 16: Other API
HugeGraph-Server exposes interfaces for operating on graphs to clients via HugeGraph-API over HTTP, mainly covering CRUD of metadata and graph data, traversal algorithms, variables, graph operations, and other operations.
1 - Schema API
1.1 Schema
HugeGraph provides a single endpoint to fetch the full schema of a graph, including PropertyKey, VertexLabel, EdgeLabel and IndexLabel.
Method & Url
GET http://localhost:8080/graphs/{graph_name}/schema

e.g: GET http://localhost:8080/graphs/hugegraph/schema

Response Status
200
Response Body
{
"propertykeys": [
{
"id": 7,
"name": "price",
- "data_type": "INT",
+ "data_type": "DOUBLE",
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.741"
+ "~create_time": "2023-05-08 17:49:05.316"
}
},
{
@@ -28,11 +29,10 @@
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.729"
+ "~create_time": "2023-05-08 17:49:05.309"
}
},
{
@@ -42,11 +42,10 @@
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.691"
+ "~create_time": "2023-05-08 17:49:05.287"
}
},
{
@@ -56,11 +55,10 @@
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.678"
+ "~create_time": "2023-05-08 17:49:05.280"
}
},
{
@@ -70,11 +68,10 @@
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.718"
+ "~create_time": "2023-05-08 17:49:05.301"
}
},
{
@@ -84,11 +81,10 @@
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.707"
+ "~create_time": "2023-05-08 17:49:05.294"
}
},
{
@@ -98,11 +94,10 @@
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.609"
+ "~create_time": "2023-05-08 17:49:05.250"
}
}
],
@@ -115,9 +110,11 @@
"name"
],
"nullable_keys": [
- "age"
+ "age",
+ "city"
],
"index_labels": [
+ "personByAge",
"personByCity",
"personByAgeAndCity"
],
@@ -130,19 +127,15 @@
"ttl": 0,
"enable_label_index": true,
"user_data": {
- "~create_time": "2021-09-03 15:13:40.783"
+ "~create_time": "2023-05-08 17:49:05.336"
}
},
{
"id": 2,
"name": "software",
- "id_strategy": "PRIMARY_KEY",
- "primary_keys": [
- "name"
- ],
- "nullable_keys": [
- "price"
- ],
+ "id_strategy": "CUSTOMIZE_NUMBER",
+ "primary_keys": [],
+ "nullable_keys": [],
"index_labels": [
"softwareByPrice"
],
@@ -155,7 +148,7 @@
"ttl": 0,
"enable_label_index": true,
"user_data": {
- "~create_time": "2021-09-03 15:13:40.840"
+ "~create_time": "2023-05-08 17:49:05.347"
}
}
],
@@ -165,13 +158,9 @@
"name": "knows",
"source_label": "person",
"target_label": "person",
- "frequency": "MULTIPLE",
- "sort_keys": [
- "date"
- ],
- "nullable_keys": [
- "weight"
- ],
+ "frequency": "SINGLE",
+ "sort_keys": [],
+ "nullable_keys": [],
"index_labels": [
"knowsByWeight"
],
@@ -183,7 +172,7 @@
"ttl": 0,
"enable_label_index": true,
"user_data": {
- "~create_time": "2021-09-03 15:13:41.840"
+ "~create_time": "2023-05-08 17:49:08.437"
}
},
{
@@ -192,11 +181,8 @@
"source_label": "person",
"target_label": "software",
"frequency": "SINGLE",
- "sort_keys": [
- ],
- "nullable_keys": [
- "weight"
- ],
+ "sort_keys": [],
+ "nullable_keys": [],
"index_labels": [
"createdByDate",
"createdByWeight"
@@ -209,13 +195,27 @@
"ttl": 0,
"enable_label_index": true,
"user_data": {
- "~create_time": "2021-09-03 15:13:41.868"
+ "~create_time": "2023-05-08 17:49:08.446"
}
}
],
"indexlabels": [
{
"id": 1,
+ "name": "personByAge",
+ "base_type": "VERTEX_LABEL",
+ "base_value": "person",
+ "index_type": "RANGE_INT",
+ "fields": [
+ "age"
+ ],
+ "status": "CREATED",
+ "user_data": {
+ "~create_time": "2023-05-08 17:49:05.375"
+ }
+ },
+ {
+ "id": 2,
"name": "personByCity",
"base_type": "VERTEX_LABEL",
"base_value": "person",
@@ -225,68 +225,68 @@
],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.886"
+ "~create_time": "2023-05-08 17:49:06.898"
}
},
{
- "id": 4,
- "name": "createdByDate",
- "base_type": "EDGE_LABEL",
- "base_value": "created",
+ "id": 3,
+ "name": "personByAgeAndCity",
+ "base_type": "VERTEX_LABEL",
+ "base_value": "person",
"index_type": "SECONDARY",
"fields": [
- "date"
+ "age",
+ "city"
],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:41.878"
+ "~create_time": "2023-05-08 17:49:07.407"
}
},
{
- "id": 5,
- "name": "createdByWeight",
- "base_type": "EDGE_LABEL",
- "base_value": "created",
+ "id": 4,
+ "name": "softwareByPrice",
+ "base_type": "VERTEX_LABEL",
+ "base_value": "software",
"index_type": "RANGE_DOUBLE",
"fields": [
- "weight"
+ "price"
],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:42.117"
+ "~create_time": "2023-05-08 17:49:07.916"
}
},
{
- "id": 2,
- "name": "personByAgeAndCity",
- "base_type": "VERTEX_LABEL",
- "base_value": "person",
+ "id": 5,
+ "name": "createdByDate",
+ "base_type": "EDGE_LABEL",
+ "base_value": "created",
"index_type": "SECONDARY",
"fields": [
- "age",
- "city"
+ "date"
],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:41.351"
+ "~create_time": "2023-05-08 17:49:08.454"
}
},
{
- "id": 3,
- "name": "softwareByPrice",
- "base_type": "VERTEX_LABEL",
- "base_value": "software",
- "index_type": "RANGE_INT",
+ "id": 6,
+ "name": "createdByWeight",
+ "base_type": "EDGE_LABEL",
+ "base_value": "created",
+ "index_type": "RANGE_DOUBLE",
"fields": [
- "price"
+ "weight"
],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:41.587"
+ "~create_time": "2023-05-08 17:49:08.963"
}
},
{
- "id": 6,
+ "id": 7,
"name": "knowsByWeight",
"base_type": "EDGE_LABEL",
"base_value": "knows",
@@ -296,13 +296,13 @@
],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:42.376"
+ "~create_time": "2023-05-08 17:49:09.473"
}
}
]
}
2 - PropertyKey API
1.2 PropertyKey
Params:
- name: property key name, required
- data_type: data type of the property key, one of bool, byte, int, long, float, double, string, date, uuid, blob; default string
- cardinality: cardinality of the property key, one of single, list, set; default single
Request body fields:
- id: property key id
- properties: the properties of the property key; empty for property keys
- user_data: general-purpose metadata for the property key, e.g. the allowed value range of an age property (min 0, max 100); currently not validated at all, merely a reserved extension point
1.2.1 Create a PropertyKey
Method & Url
POST http://localhost:8080/graphs/hugegraph/schema/propertykeys

Request Body
{
"name": "age",
"data_type": "INT",
"cardinality": "SINGLE"
@@ -324,8 +324,8 @@
},
"task_id": 0
}
1.2.2 Add or remove userdata for an existing PropertyKey
Params
- action: whether this request appends or removes data; one of append (add) and eliminate (remove)
Method & Url
PUT http://localhost:8080/graphs/hugegraph/schema/propertykeys/age?action=append

Request Body
{
"name": "age",
"user_data": {
"min": 0,
@@ -351,8 +351,8 @@
},
"task_id": 0
}
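A small Java sketch combining 1.2.1 and 1.2.2: create the age PropertyKey, then append user_data to it (the class and helper names are illustrative assumptions):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PropertyKeySketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Create the "age" property key ...
        send(client, "POST", "http://localhost:8080/graphs/hugegraph/schema/propertykeys",
                "{\"name\": \"age\", \"data_type\": \"INT\", \"cardinality\": \"SINGLE\"}");
        // ... then append user_data to it
        send(client, "PUT", "http://localhost:8080/graphs/hugegraph/schema/propertykeys/age?action=append",
                "{\"name\": \"age\", \"user_data\": {\"min\": 0, \"max\": 100}}");
    }

    static void send(HttpClient client, String method, String url, String json) throws Exception {
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "application/json")
                .method(method, HttpRequest.BodyPublishers.ofString(json))
                .build();
        System.out.println(client.send(req, HttpResponse.BodyHandlers.ofString()).body());
    }
}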
1.2.3 Get all PropertyKeys
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/propertykeys

Response Status
200
Response Body
{
"propertykeys": [
{
@@ -413,8 +413,8 @@
}
]
}
1.2.4 Get a PropertyKey by name
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/propertykeys/age

where age is the name of the PropertyKey to get
Response Status
200
Response Body
{
"id": 1,
"name": "age",
@@ -430,13 +430,13 @@
"~create_time": "2022-05-13 13:47:23.745"
}
}
1.2.5 Delete a PropertyKey by name
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/schema/propertykeys/age

where age is the name of the PropertyKey to delete
Response Status
202
Response Body
{
"task_id" : 0
}
3 - VertexLabel API
1.3 VertexLabel
Assumes the PropertyKeys listed in 1.1.3 have already been created
Params
- id: vertex label id
- name: vertex label name, required
- id_strategy: ID strategy of the vertex label: primary key, automatic, customized string, customized number, customized UUID; default primary key
- properties: property keys associated with the vertex label
- primary_keys: primary-key properties; must be non-empty when the ID strategy is PRIMARY_KEY, and must be empty for the other strategies
- enable_label_index: whether to enable the label index, default false
- index_names: indexes created on the vertex label, see 3.4 for details
- nullable_keys: nullable properties
- user_data: general-purpose metadata for the vertex label, same purpose as for property keys
1.3.1 Create a VertexLabel
Method & Url
POST http://localhost:8080/graphs/hugegraph/schema/vertexlabels

Request Body
{
"name": "person",
"id_strategy": "DEFAULT",
"properties": [
@@ -498,8 +498,8 @@
"ttl_start_time": "createdTime",
"enable_label_index": true
}
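A minimal sketch of the VertexLabel creation call, with a request body adapted from the example above (the class name is illustrative; assumes the name and age PropertyKeys exist):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateVertexLabelSketch {
    public static void main(String[] args) throws Exception {
        // DEFAULT id_strategy falls back to PRIMARY_KEY, so primary_keys must be set
        String body = "{\"name\": \"person\", \"id_strategy\": \"DEFAULT\","
                + " \"properties\": [\"name\", \"age\"], \"primary_keys\": [\"name\"],"
                + " \"nullable_keys\": [\"age\"], \"enable_label_index\": true}";
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/graphs/hugegraph/schema/vertexlabels"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        System.out.println(HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString()).body());
    }
}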
1.3.2 Add properties or userdata to an existing VertexLabel, or remove userdata (removing properties is currently not supported)
Params
- action: whether this request appends or removes data; one of append (add) and eliminate (remove)
Method & Url
PUT http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person?action=append

Request Body
{
"name": "person",
"properties": [
"city"
@@ -532,8 +532,8 @@
"super": "animal"
}
}
1.3.3 Get all VertexLabels
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/vertexlabels

Response Status
200
Response Body
{
"vertexlabels": [
{
@@ -580,8 +580,8 @@
}
]
}
1.3.4 Get a VertexLabel by name
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person

Response Status
200
Response Body
{
"id": 1,
"primary_keys": [
@@ -604,13 +604,13 @@
"super": "animal"
}
}
1.3.5 Delete a VertexLabel by name
Deleting a VertexLabel deletes the corresponding vertices and their index data, and produces an async task
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person

Response Status
202
Response Body
{
"task_id": 1
}
Note:
The execution status of the async task can be queried via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the async task RESTful API for more.
4 - EdgeLabel API
1.4 EdgeLabel
Assumes the PropertyKeys of 1.2.3 and the VertexLabels of 1.3.3 have already been created
Params
- name: edge label name, required
- source_label: name of the source vertex label, required
- target_label: name of the target vertex label, required
- frequency: whether multiple edges may exist between two vertices; SINGLE or MULTIPLE; optional, default SINGLE
- properties: property keys associated with the edge label, optional
- sort_keys: when multiple edges are allowed, the list of properties that distinguish them
- nullable_keys: nullable properties, optional, nullable by default
- enable_label_index: whether to enable the label index, default false
1.4.1 Create an EdgeLabel
Method & Url
POST http://localhost:8080/graphs/hugegraph/schema/edgelabels

Request Body
{
"name": "created",
"source_label": "person",
"target_label": "software",
@@ -682,8 +682,8 @@
"ttl_start_time": "createdTime",
"user_data": {}
}
1.4.2 Add properties or userdata to an existing EdgeLabel, or remove userdata (removing properties is currently not supported)
Params
- action: whether this request appends or removes data; one of append (add) and eliminate (remove)
Method & Url
PUT http://localhost:8080/graphs/hugegraph/schema/edgelabels/created?action=append

Request Body
{
"name": "created",
"properties": [
"weight"
@@ -713,8 +713,8 @@
"enable_label_index": true,
"user_data": {}
}
1.4.3 Get all EdgeLabels
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/edgelabels

Response Status
200
Response Body
{
"edgelabels": [
{
@@ -758,8 +758,8 @@
}
]
}
1.4.4 Get an EdgeLabel by name
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/edgelabels/created

Response Status
200
Response Body
{
"id": 1,
"sort_keys": [
@@ -782,13 +782,13 @@
"enable_label_index": true,
"user_data": {}
}
1.4.5 Delete an EdgeLabel by name
Deleting an EdgeLabel deletes the corresponding edges and their index data, and produces an async task
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/schema/edgelabels/created

Response Status
202
Response Body
{
"task_id": 1
}
Note:
The execution status of the async task can be queried via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the async task RESTful API for more.
5 - IndexLabel API
1.5 IndexLabel
Assumes the PropertyKeys of 1.1.3, the VertexLabels of 1.2.3 and the EdgeLabels of 1.3.3 have already been created
1.5.1 Create an IndexLabel
Method & Url
POST http://localhost:8080/graphs/hugegraph/schema/indexlabels

Request Body
{
"name": "personByCity",
"base_type": "VERTEX_LABEL",
"base_value": "person",
@@ -811,8 +811,8 @@
},
"task_id": 2
}
1.5.2 Get all IndexLabels
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/indexlabels

Response Status
200
Response Body
{
"indexlabels": [
{
@@ -858,8 +858,8 @@
}
]
}
1.5.3 Get an IndexLabel by name
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/indexlabels/personByCity

Response Status
200
Response Body
{
"id": 1,
"base_type": "VERTEX_LABEL",
@@ -870,28 +870,28 @@
],
"index_type": "SECONDARY"
}
1.5.4 Delete an IndexLabel by name
Deleting an IndexLabel deletes the related index data, and produces an async task
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/schema/indexlabels/personByCity

Response Status
202
Response Body
{
"task_id": 1
}
Note:
The execution status of the async task can be queried via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the async task RESTful API for more.
6 - Rebuild API
1.6 Rebuild
1.6.1 Rebuild an IndexLabel
Method & Url
PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/indexlabels/personByCity

Response Status
202
Response Body
{
"task_id": 1
}
Note:
The execution status of the async task can be queried via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the async task RESTful API for more.
1.6.2 Rebuild all indexes of a VertexLabel
Method & Url
PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/vertexlabels/person

Response Status
202
Response Body
{
"task_id": 2
}
Note:
The execution status of the async task can be queried via GET http://localhost:8080/graphs/hugegraph/tasks/2 (where "2" is the task_id); see the async task RESTful API for more.
1.6.3 Rebuild all indexes of an EdgeLabel
Method & Url
PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/edgelabels/created

Response Status
202
Response Body
{
"task_id": 3
}
Note:
The execution status of the async task can be queried via GET http://localhost:8080/graphs/hugegraph/tasks/3 (where "3" is the task_id); see the async task RESTful API for more.
7 - Vertex API
2.1 Vertex
The Id strategy of a vertex label determines the id type of its vertices, as follows:
Id_Strategy          id type
AUTOMATIC            number
PRIMARY_KEY          string
CUSTOMIZE_STRING     string
CUSTOMIZE_NUMBER     number
CUSTOMIZE_UUID       uuid
For the vertex GET/PUT/DELETE APIs, the id in the url must carry its type information, expressed by whether the id is quoted as a JSON string:
- when the id type is number, the id in the url is unquoted, e.g. xxx/vertices/123456
- when the id type is string, the id in the url is quoted, e.g. xxx/vertices/"123456"
The examples below all assume the schema described above has already been created.
2.1.1 Create a vertex
Method & Url
POST http://localhost:8080/graphs/hugegraph/graph/vertices

Request Body
{
"label": "person",
"properties": {
"name": "marko",
@@ -918,8 +918,8 @@
]
}
}
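A minimal Java sketch of the vertex-creation call (the class name is illustrative; assumes the person schema above exists):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateVertexSketch {
    public static void main(String[] args) throws Exception {
        String body = "{\"label\": \"person\","
                + " \"properties\": {\"name\": \"marko\", \"age\": 29}}";
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/graphs/hugegraph/graph/vertices"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        System.out.println(HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString()).body());
    }
}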
2.1.2 Create multiple vertices
Method & Url
POST http://localhost:8080/graphs/hugegraph/graph/vertices/batch

Request Body
[
{
"label": "person",
"properties": {
@@ -941,8 +941,8 @@
"1:marko",
"2:ripple"
]
2.1.3 Update vertex properties
Method & Url
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/"1:marko"?action=append

Request Body
{
"label": "person",
"properties": {
"age": 30,
@@ -1004,8 +1004,8 @@
}
]
}
Method & Url
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/batch

Request Body
{
"vertices":[
{
"label":"software",
@@ -1069,8 +1069,8 @@
}
]
}
Analysis of the result:
- lang: no update strategy was specified, so the new value simply overwrites the old one, whether or not the new value is null
- price: the BIGGER strategy was specified; the old value was 328 and the new value 299, so the old value 328 is kept
- age: the OVERRIDE strategy was specified, but age was absent from the new values (equivalent to null), so the old value 32 is kept
- city: also the OVERRIDE strategy, but with a non-null new value, so the old value is overwritten
- weight: the SUM strategy; old value 0.1 plus new value 0.2 gives 0.3
- hobby (cardinality Set): the UNION strategy, so the new values are unioned with the old ones
The other update strategies can be used analogously and are not spelled out here.
2.1.5 Delete vertex properties
Method & Url
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/"1:marko"?action=eliminate

Request Body
{
"label": "person",
"properties": {
"city": "Beijing"
@@ -1096,8 +1096,8 @@
]
}
}
2.1.6 Get vertices matching conditions
Params
- label: vertex label
- properties: property key-value pairs (querying by property requires that an index was built in advance)
- limit: maximum number of results
- page: page token
All of the above are optional. If page is provided, limit must also be provided and no other parameter is allowed. label, properties and limit can be combined freely.
Property key-value pairs are JSON-formatted property names and values, and multiple pairs may be given as query conditions. Property values support exact matching and range matching: exact matching looks like properties={"age":29}, range matching looks like properties={"age":"P.gt(29)"}. Range matching supports the following expressions:
Expression                          Description
P.eq(number)                        vertices whose property value equals number
P.neq(number)                       vertices whose property value does not equal number
P.lt(number)                        vertices whose property value is less than number
P.lte(number)                       vertices whose property value is less than or equal to number
P.gt(number)                        vertices whose property value is greater than number
P.gte(number)                       vertices whose property value is greater than or equal to number
P.between(number1,number2)          vertices whose property value is >= number1 and < number2
P.inside(number1,number2)           vertices whose property value is > number1 and < number2
P.outside(number1,number2)          vertices whose property value is < number1 or > number2
P.within(value1,value2,value3,…)    vertices whose property value equals any of the given values
Query all vertices with label person and age 29
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/vertices?label=person&properties={"age":29}&limit=1

Response Status
200
Response Body
{
"vertices": [
{
@@ -1127,8 +1127,8 @@
}
]
}
Paged query of all vertices, fetching the first page (page without a value), limited to 3:
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/vertices?page&limit=3

Response Status
200
Response Body
{
"vertices": [{
"id": "2:ripple",
@@ -1191,8 +1191,8 @@
"page": "001000100853313a706574657200f07ffffffc00e797c6349be736fffc8699e8a502efe10004"
}
The response body carries the page token of the next page, "page": "001000100853313a706574657200f07ffffffc00e797c6349be736fffc8699e8a502efe10004"; assign this value to the page parameter when querying the next page.
Paged query of all vertices, fetching the next page (page set to the token returned by the previous page), limited to 3:
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/vertices?page=001000100853313a706574657200f07ffffffc00e797c6349be736fffc8699e8a502efe10004&limit=3

Response Status
200
Response Body
{
"vertices": [{
"id": "1:josh",
@@ -1254,8 +1254,8 @@
],
"page": null
}
"page": null here means there is no next page. (Note: with a Cassandra backend, for performance reasons the returned page value may be non-null even when the returned page happens to be the last one; requesting the next page with that token then returns empty data and page = null. Other cases are similar.)
2.1.7 Get a vertex by Id
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"

Response Status
200
Response Body
{
"id": "1:marko",
"label": "person",
@@ -1275,13 +1275,13 @@
]
}
}
2.1.8 Delete a vertex by Id
Params
- label: vertex label, optional
Delete a vertex by Id only
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"

Response Status
204

Delete a vertex by Label + Id
Deleting a vertex with both the label parameter and the Id generally performs better than deleting by Id alone.
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"?label=person

Response Status
204
8 - Edge API
2.2 Edge
The change to the vertex id format also affects the edge Id and the format of the source and target vertex ids.
An EdgeId is the concatenation of src-vertex-id + direction + label + sort-values + tgt-vertex-id, but here the vertex id type is distinguished by a prefix rather than by quotes:
- when the id type is number, the vertex id in the EdgeId carries the prefix L, e.g. "L123456>1>>L987654"
- when the id type is string, the vertex id in the EdgeId carries the prefix S, e.g. "S1:peter>1>>S2:lop"
The examples below all assume the schema and vertex data described above has already been created.
2.2.1 Create an edge
Params
- label: edge label name, required
- outV: source vertex id, required
- inV: target vertex id, required
- outVLabel: source vertex label, required
- inVLabel: target vertex label, required
- properties: properties attached to the edge; each entry consists of:
  - name: property name
  - value: property value
Method & Url
POST http://localhost:8080/graphs/hugegraph/graph/edges

Request Body
{
"label": "created",
"outV": "1:peter",
"inV": "2:lop",
@@ -1306,8 +1306,8 @@
"weight": 0.2
}
}
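A minimal Java sketch of the edge-creation call above (the class name is illustrative; assumes vertices 1:peter and 2:lop exist):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateEdgeSketch {
    public static void main(String[] args) throws Exception {
        String body = "{\"label\": \"created\", \"outV\": \"1:peter\", \"inV\": \"2:lop\","
                + " \"outVLabel\": \"person\", \"inVLabel\": \"software\","
                + " \"properties\": {\"date\": \"20170324\", \"weight\": 0.2}}";
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/graphs/hugegraph/graph/edges"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        System.out.println(HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString()).body());
    }
}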
2.2.2 Create multiple edges
Params
- check_vertex: whether to check that the vertices exist (true | false); when set to true, inserting an edge whose source or target vertex does not exist raises an error.
Method & Url
POST http://localhost:8080/graphs/hugegraph/graph/edges/batch

Request Body
[
{
"label": "created",
"outV": "1:peter",
@@ -1336,8 +1336,8 @@
"S1:peter>1>>S2:lop",
"S1:marko>2>>S1:vadas"
]
2.2.3 Update edge properties
Method & Url
PUT http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?action=append

Request Body
{
"properties": {
"weight": 1.0
}
@@ -1386,8 +1386,8 @@
}
]
}
Method & Url
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/edges/batch

Request Body
{
"edges":[
{
"id":"S1:josh>2>>S2:ripple",
@@ -1452,8 +1452,8 @@
}
]
}
2.2.5 Delete edge properties
Method & Url
PUT http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?action=eliminate

Request Body
{
"properties": {
"weight": 1.0
}
@@ -1472,8 +1472,8 @@
}
}
2.2.6 Get edges matching conditions
Params
- vertex_id: vertex id
- direction: edge direction (OUT | IN | BOTH)
- label: edge label
- properties: property key-value pairs (querying by property requires that an index was built in advance)
- offset: offset, default 0
- limit: number of results, default 100
- page: page token
The supported queries are:
- with vertex_id: page is not allowed; direction, label and properties are optional; offset and limit can narrow the result range
- without vertex_id: label and properties are optional
- with page: offset must be absent or 0, direction is not allowed, and at most one property pair may be given
- without page: offset and limit can narrow the result range, and direction is ignored
Property key-value pairs are JSON-formatted property names and values, and multiple pairs may be given as query conditions. Property values support exact matching and range matching: exact matching looks like properties={"weight":0.8}, range matching looks like properties={"age":"P.gt(0.8)"}. Range matching supports the following expressions:
Expression                          Description
P.eq(number)                        edges whose property value equals number
P.neq(number)                       edges whose property value does not equal number
P.lt(number)                        edges whose property value is less than number
P.lte(number)                       edges whose property value is less than or equal to number
P.gt(number)                        edges whose property value is greater than number
P.gte(number)                       edges whose property value is greater than or equal to number
P.between(number1,number2)          edges whose property value is >= number1 and < number2
P.inside(number1,number2)           edges whose property value is > number1 and < number2
P.outside(number1,number2)          edges whose property value is < number1 or > number2
P.within(value1,value2,value3,…)    edges whose property value equals any of the given values
Query the edges connected to vertex person:josh (vertex_id="1:josh") with label created
Method & Url
GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?vertex_id="1:josh"&direction=BOTH&label=created&properties={}

Response Status
200
Response Body
{
"edges": [
{
@@ -1504,8 +1504,8 @@
}
]
}
Paged query of all edges, fetching the first page (page without a value), limited to 3:
Method & Url
GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?page&limit=3

Response Status
200
Response Body
{
"edges": [{
"id": "S1:peter>2>>S2:lop",
@@ -1550,8 +1550,8 @@
"page": "002500100753313a6a6f73681210010004000000020953323a726970706c65f07ffffffcf07ffffffd8460d63f4b398dd2721ed4fdb7716b420004"
}
The response body carries the page token of the next page, "page": "002500100753313a6a6f73681210010004000000020953323a726970706c65f07ffffffcf07ffffffd8460d63f4b398dd2721ed4fdb7716b420004"; assign this value to the page parameter when querying the next page.
Paged query of all edges, fetching the next page (page set to the token returned by the previous page), limited to 3:
Method & Url
GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?page=002500100753313a6a6f73681210010004000000020953323a726970706c65f07ffffffcf07ffffffd8460d63f4b398dd2721ed4fdb7716b420004&limit=3

Response Status
200
Response Body
{
"edges": [{
"id": "S1:marko>1>20130220>S1:josh",
@@ -1595,8 +1595,8 @@
],
"page": null
}
"page": null here means there is no next page. (Note: with a Cassandra backend, for performance reasons the returned page value may be non-null even when the returned page happens to be the last one; requesting the next page with that token then returns empty data and page = null. Other cases are similar.)
2.2.7 Get an edge by Id
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop

Response Status
200
Response Body
{
"id": "S1:peter>1>>S2:lop",
"label": "created",
@@ -1610,10 +1610,10 @@
"weight": 0.2
}
}
2.2.8 Delete an edge by Id
Params
- label: edge label, optional
Delete an edge by Id only
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop

Response Status
204

Delete an edge by Label + Id
Deleting an edge with both the label parameter and the Id generally performs better than deleting by Id alone.
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?label=created

Response Status
204
9 - Traverser API
3.1 Traverser API overview
HugeGraphServer provides a RESTful API for the HugeGraph graph database. Besides basic CRUD of vertices and edges, it offers a set of traversal methods, referred to as the traverser API. These traversals implement a number of complex graph algorithms, making it convenient to analyse and mine the graph.
The Traverser APIs supported by HugeGraph include:
- K-out API: from a source vertex, find the neighbors reachable in exactly N steps, in a basic and an advanced version:
  - the basic version uses GET to find the neighbors reachable in exactly N steps from the source vertex
  - the advanced version uses POST for the same search, and differs from the basic version in that it:
    - supports returning only the neighbor count
    - supports filtering edges by property
    - supports returning the shortest path to each neighbor
- K-neighbor API: from a source vertex, find all neighbors reachable within N steps, in a basic and an advanced version:
  - the basic version uses GET to find all neighbors reachable within N steps from the source vertex
  - the advanced version uses POST for the same search, and differs from the basic version in that it:
    - supports returning only the neighbor count
    - supports filtering edges by property
    - supports returning the shortest path to each neighbor
- Same Neighbors: query the common neighbors of two vertices
- Jaccard Similarity API: compute jaccard similarity, in two forms:
  - a GET form that computes the similarity of the neighbors of two vertices (intersection over union)
  - a POST form that finds the N vertices in the whole graph with the highest jaccard similarity to a source vertex
- Shortest Path API: find one shortest path between two vertices
- All Shortest Paths: find all shortest paths between two vertices
- Weighted Shortest Path: find the weighted shortest path from a source to a target vertex
- Single Source Shortest Path: find the weighted shortest paths from one vertex to every other vertex
- Multi Node Shortest Path: find the pairwise shortest paths within a given vertex set
- Paths API: find all paths between two vertices, in a basic and an advanced version:
  - the basic version uses GET and finds all paths between a source and a target vertex
  - the advanced version uses POST and finds all qualifying paths between a set of sources and a set of targets
- Customized Paths API: starting from a batch of vertices, traverse by a given pattern and return all paths passed through
- Template Path API: specify a source, a target and path constraints between them, and find the matching paths
- Crosspoints API: find the intersection points of two vertices (common ancestors or common descendants)
- Customized Crosspoints API: starting from a batch of vertices, traverse by multiple patterns and return the intersections of the vertices reached in the final step
- Rings API: cyclic paths reachable from a source vertex
- Rays API: paths from a source vertex out to the boundary (i.e. acyclic paths)
- Fusiform Similarity API: find the fusiform-similar vertices of a vertex
- Vertices API:
  - query vertices by id in batch
  - get the partition of a vertex
  - query vertices by partition
- Edges API:
  - query edges by id in batch
  - get the partition of an edge
  - query edges by partition
3.2 Traverser API details
The examples in the usage sections are all based on the graph given on the TinkerPop site:
The data loading program is as follows:
public class Loader {
public static void main(String[] args) {
HugeClient client = new HugeClient("http://127.0.0.1:8080", "hugegraph");
@@ -1721,28 +1721,28 @@
peter.addEdge("created", lop, "date", "20170324", "weight", 0.2);
}
}
The vertex IDs are:
"2:ripple",
"1:vadas",
"1:peter",
"1:josh",
"1:marko",
"2:lop"

The edge IDs are:
"S1:peter>2>>S2:lop",
"S1:josh>2>>S2:lop",
"S1:josh>2>>S2:ripple",
"S1:marko>1>20130220>S1:josh",
"S1:marko>1>20160110>S1:vadas",
"S1:marko>2>>S2:lop"

3.2.1 K-out API (GET, basic)
3.2.1.1 Function
Given a source vertex, a direction, an optional edge label and a depth, find the vertices reachable from the source vertex in exactly depth steps
Params
- source: source vertex id, required
- direction: direction in which to expand from the source vertex (OUT, IN, BOTH), optional, default BOTH
- max_depth: number of steps, required
- label: edge label, optional, defaults to all edge labels
- nearest: when true, the shortest path from the source vertex to each result vertex has length exactly depth, with no shorter path existing; when false, there merely exists a path of length depth from the source to the result vertex (not necessarily shortest, cycles allowed); optional, default true
- max_degree: maximum number of adjacent edges to traverse per vertex during the query, optional, default 10000
- capacity: maximum number of vertices visited during the traversal, optional, default 10000000
- limit: maximum number of vertices returned, optional, default 10000000
3.2.1.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/kout?source="1:marko"&max_depth=2

Response Status
200
Response Body
{
"vertices":[
"2:ripple",
"1:peter"
]
}
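Note that the quoted string id must be percent-encoded when placed in the URL; a hedged Java sketch of the call above (substituting hugegraph for {graph}; the class name is illustrative):
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class KoutGetSketch {
    public static void main(String[] args) throws Exception {
        // The quotes around "1:marko" mark a string id and must be encoded as %22
        String source = URLEncoder.encode("\"1:marko\"", StandardCharsets.UTF_8);
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/graphs/hugegraph/traversers/kout?source="
                        + source + "&max_depth=2"))
                .GET()
                .build();
        System.out.println(HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString()).body());
    }
}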
3.2.1.3 Use cases
Find the vertices reachable via exactly N hops of a relationship. Two examples:
- In a family graph, find all grandchildren of a person: the set of vertices person A reaches via two consecutive "son" edges.
- In a social graph, discover potential friends, e.g. the users exactly two "friend" edges away from the target user.
3.2.2 K-out API (POST, advanced)
3.2.2.1 Function
Given a source vertex, a step (direction, edge labels and property filters) and a depth, find the vertices reachable from the source vertex in exactly depth steps.
Differences from the basic K-out:
- supports returning only the neighbor count
- supports filtering edges by property
- supports returning the shortest path to each neighbor
Params
- source: source vertex id, required
- step: the step taken from the source vertex, required, structured as:
  - direction: edge direction (OUT, IN, BOTH), default BOTH
  - labels: list of edge labels
  - properties: filter edges by property values
  - max_degree: maximum number of adjacent edges to traverse per vertex during the query, default 10000 (note: before 0.12, step only accepted degree as the parameter name; since 0.12 the unified name is max_degree, with degree still accepted for backward compatibility)
  - skip_degree: minimum edge count at which a super vertex is discarded during the query, i.e. a vertex with more than skip_degree adjacent edges is skipped entirely. Optional; when enabled it must satisfy skip_degree >= max_degree; default 0 (disabled), meaning no vertex is skipped (note: when enabled, the traversal tries to visit up to skip_degree edges of a vertex rather than just max_degree, adding traversal overhead that may noticeably affect query performance; be sure you understand it before enabling)
- max_depth: number of steps, required
- nearest: when true, the shortest path from the source vertex to each result vertex has length exactly depth, with no shorter path existing; when false, there merely exists a path of length depth (not necessarily shortest, cycles allowed); optional, default true
- count_only: boolean; true returns only the count of results without the concrete results, false returns the concrete results; default false
- with_path: true returns the shortest path from the source to each neighbor, false does not; optional, default false
- with_vertex: optional, default false:
  - true: the result carries full vertex info (all vertices on the paths)
    - when with_path is true, full info of all vertices on the paths is returned
    - when with_path is false, full info of all neighbors is returned
  - false: only vertex ids are returned
- capacity: maximum number of vertices visited during the traversal, optional, default 10000000
- limit: maximum number of vertices returned, optional, default 10000000
3.2.2.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/kout

Request Body
{
"source": "1:marko",
"step": {
"direction": "BOTH",
@@ -1830,8 +1830,8 @@
}
]
}
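A minimal sketch of the POST form with a bare-bones step (only direction and max_degree set; the values and class name are illustrative, adapted from the request above):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KoutPostSketch {
    public static void main(String[] args) throws Exception {
        // Expand in both directions, two hops, ids only
        String body = "{\"source\": \"1:marko\","
                + " \"step\": {\"direction\": \"BOTH\", \"max_degree\": 10000},"
                + " \"max_depth\": 2, \"nearest\": true, \"count_only\": false}";
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/graphs/hugegraph/traversers/kout"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        System.out.println(HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString()).body());
    }
}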
3.2.2.3 Use cases
See 3.2.1.3.
3.2.3 K-neighbor (GET, basic)
3.2.3.1 Function
Given a source vertex, a direction, an optional edge label and a depth, find all vertices reachable within depth steps, including the source vertex itself
Equivalent to the union of: the source vertex, K-out(1), K-out(2), ..., K-out(max_depth)
Params
- source: source vertex id, required
- direction: direction in which to expand from the source vertex (OUT, IN, BOTH), optional, default BOTH
- max_depth: number of steps, required
- label: edge label, optional, defaults to all edge labels
- max_degree: maximum number of adjacent edges to traverse per vertex during the query, optional, default 10000
- limit: maximum number of vertices returned, which is also the maximum number of vertices visited during the traversal, optional, default 10000000
3.2.3.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/kneighbor?source="1:marko"&max_depth=2

Response Status
200
Response Body
{
"vertices":[
"2:ripple",
@@ -1842,8 +1842,8 @@
"2:lop"
]
}
3.2.3.3 Use cases
Find all vertices reachable within N hops, for example:
- In a family graph, find all descendants of a person within five degrees: the set of vertices person A reaches via up to five consecutive "parent-child" edges.
- In a social graph, discover the friend circle: the users reachable via 1, 2 or 3 "friend" edges together form the target user's circle.
3.2.4 K-neighbor API (POST, advanced)
3.2.4.1 Function
Given a source vertex, a step (direction, edge labels and property filters) and a depth, find all vertices reachable from the source vertex within depth steps.
Differences from the basic K-neighbor:
- supports returning only the neighbor count
- supports filtering edges by property
- supports returning the shortest path to each neighbor
Params
- source: source vertex id, required
- step: the step taken from the source vertex, required, structured as:
  - direction: edge direction (OUT, IN, BOTH), default BOTH
  - labels: list of edge labels
  - properties: filter edges by property values
  - max_degree: maximum number of adjacent edges to traverse per vertex during the query, default 10000 (note: before 0.12, step only accepted degree as the parameter name; since 0.12 the unified name is max_degree, with degree still accepted for backward compatibility)
  - skip_degree: minimum edge count at which a super vertex is discarded during the query, i.e. a vertex with more than skip_degree adjacent edges is skipped entirely. Optional; when enabled it must satisfy skip_degree >= max_degree; default 0 (disabled), meaning no vertex is skipped (note: when enabled, the traversal tries to visit up to skip_degree edges of a vertex rather than just max_degree, adding traversal overhead that may noticeably affect query performance; be sure you understand it before enabling)
- max_depth: number of steps, required
- count_only: boolean; true returns only the count of results without the concrete results, false returns the concrete results; default false
- with_path: true returns the shortest path from the source to each neighbor, false does not; optional, default false
- with_vertex: optional, default false:
  - true: the result carries full vertex info (all vertices on the paths)
    - when with_path is true, full info of all vertices on the paths is returned
    - when with_path is false, full info of all neighbors is returned
  - false: only vertex ids are returned
- limit: maximum number of vertices returned, optional, default 10000000
3.2.4.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/kneighbor

Request Body
{
"source": "1:marko",
"step": {
"direction": "BOTH",
@@ -1972,20 +1972,20 @@
}
]
}
3.2.4.3 Use cases
See 3.2.3.3.
3.2.5 Same Neighbors
3.2.5.1 Function
Query the common neighbors of two vertices
Params
- vertex: one vertex id, required
- other: the other vertex id, required
- direction: direction in which to expand from the vertices (OUT, IN, BOTH), optional, default BOTH
- label: edge label, optional, defaults to all edge labels
- max_degree: maximum number of adjacent edges to traverse per vertex during the query, optional, default 10000
- limit: maximum number of common neighbors returned, optional, default 10000000
3.2.5.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/sameneighbors?vertex="1:marko"&other="1:josh"

Response Status
200
Response Body
{
"same_neighbors":[
"2:lop"
]
}
3.2.5.3 Use cases
Find the common neighbors of two vertices:
- in a social graph, discover the common followers or common followees of two users
3.2.6 Jaccard Similarity (GET)
3.2.6.1 Function
Compute the jaccard similarity of two vertices (the intersection of the two vertices' neighbors over the union of their neighbors)
Params
- vertex: one vertex id, required
- other: the other vertex id, required
- direction: direction in which to expand from the vertices (OUT, IN, BOTH), optional, default BOTH
- label: edge label, optional, defaults to all edge labels
- max_degree: maximum number of adjacent edges to traverse per vertex during the query, optional, default 10000
3.2.6.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/jaccardsimilarity?vertex="1:marko"&other="1:josh"

Response Status
200
Response Body
{
"jaccard_similarity": 0.2
}
3.2.6.3 Applicable Scenarios
Used to evaluate the similarity or closeness of two vertices.
3.2.7 Jaccard Similarity (POST)
3.2.7.1 Function Description
Find the N vertices with the highest Jaccard similarity to a given vertex.
The Jaccard similarity is computed as: the intersection of the two vertices' neighbor sets divided by their union.
Params
- vertex: ID of a vertex, required
- step: the step taken from the start vertex, required, with the following structure:
  - direction: edge direction (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge labels
  - properties: filter edges by property values
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before version 0.12 only degree was accepted as the parameter name inside step; since 0.12 the unified name is max_degree, with degree kept for backward compatibility)
  - skip_degree: minimum edge count at which a super vertex is discarded during the query, i.e. a vertex is skipped entirely once its number of adjacent edges exceeds skip_degree. Optional; when enabled it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled), i.e. no vertex is skipped. (Note: when enabled, the traversal attempts to visit up to skip_degree edges of each vertex rather than just max_degree edges, which adds overhead and may noticeably hurt query performance; make sure you understand this before enabling it.)
- top: return the top results with the highest Jaccard similarity to the start vertex, optional, defaults to 100
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
3.2.7.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/jaccardsimilarity
Request Body
{
"vertex": "1:marko",
"step": {
"direction": "BOTH",
......
"1:peter": 0.3333333333333333,
"1:josh": 0.2
}
3.2.7.3 Applicable Scenarios
Used to find, within the graph, the vertices most similar to a given vertex.
3.2.8 Shortest Path
3.2.8.1 Function Description
Find one shortest path between a start vertex and a target vertex, given the direction, edge label (optional) and maximum depth.
Params
- source: ID of the start vertex, required
- target: ID of the target vertex, required
- direction: direction in which edges are expanded from the start vertex (OUT, IN, BOTH), optional, defaults to BOTH
- max_depth: maximum number of steps, required
- label: edge label, optional, defaults to all edge labels
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- skip_degree: minimum edge count at which a super vertex is discarded during the query, i.e. a vertex is skipped entirely once its number of adjacent edges exceeds skip_degree. Optional; when enabled it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled), i.e. no vertex is skipped. (Note: when enabled, the traversal attempts to visit up to skip_degree edges of each vertex rather than just max_degree edges, which adds overhead and may noticeably hurt query performance; make sure you understand this before enabling it.)
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
3.2.8.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/shortestpath?source="1:marko"&target="2:ripple"&max_depth=3
Response Status
200
Response Body
{
"path":[
"1:marko",
......
"2:ripple"
]
}
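For reference, the same GET request issued from Python; note that the vertex ids are passed as quoted strings, exactly as in the URL above (a sketch assuming the example graph):

import requests

params = {
    "source": '"1:marko"',   # ids are quoted in the query string
    "target": '"2:ripple"',
    "max_depth": 3,
}
resp = requests.get("http://localhost:8080/graphs/hugegraph/traversers/shortestpath",
                    params=params)
print(resp.json()["path"])   # the vertex ids along one shortest path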
3.2.8.3 Applicable Scenarios
Find the shortest path between two vertices, for example:
- in social networks, find the shortest chain of relationships between two users, i.e. the closest friendship chain
- in device-association networks, find the shortest association between two devices
3.2.9 All Shortest Paths
3.2.9.1 Function Description
Find all shortest paths between two vertices, given the direction, edge label (optional) and maximum depth.
Params
- source: ID of the start vertex, required
- target: ID of the target vertex, required
- direction: direction in which edges are expanded from the start vertex (OUT, IN, BOTH), optional, defaults to BOTH
- max_depth: maximum number of steps, required
- label: edge label, optional, defaults to all edge labels
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- skip_degree: minimum edge count at which a super vertex is discarded during the query, i.e. a vertex is skipped entirely once its number of adjacent edges exceeds skip_degree. Optional; when enabled it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled), i.e. no vertex is skipped. (Note: when enabled, the traversal attempts to visit up to skip_degree edges of each vertex rather than just max_degree edges, which adds overhead and may noticeably hurt query performance; make sure you understand this before enabling it.)
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
3.2.9.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/allshortestpaths?source="A"&target="Z"&max_depth=10
Response Status
200
Response Body
{
"paths":[
{
......
}
]
}
3.2.9.3 Applicable Scenarios
Find all shortest paths between two vertices, for example:
- in social networks, find every shortest chain of relationships between two users
- in device-association networks, find all shortest associations between two devices
3.2.10 Weighted Shortest Path
3.2.10.1 Function Description
Find one weighted shortest path between a start vertex and a target vertex, given the direction, edge label (optional) and maximum depth.
Params
- source: ID of the start vertex, required
- target: ID of the target vertex, required
- direction: direction in which edges are expanded from the start vertex (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge label, optional, defaults to all edge labels
- weight: the edge property used as weight, required; it must be a numeric property
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- skip_degree: minimum edge count at which a super vertex is discarded during the query, i.e. a vertex is skipped entirely once its number of adjacent edges exceeds skip_degree. Optional; when enabled it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled), i.e. no vertex is skipped. (Note: when enabled, the traversal attempts to visit up to skip_degree edges of each vertex rather than just max_degree edges, which adds overhead and may noticeably hurt query performance; make sure you understand this before enabling it.)
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- with_vertex: true returns complete vertex information (all vertices on the path); false returns only vertex IDs. Optional, defaults to false
3.2.10.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/weightedshortestpath?source="1:marko"&target="2:ripple"&weight="weight"&with_vertex=true
Response Status
200
Response Body
{
"path": {
"weight": 2.0,
......
}
]
}
3.2.10.3 Applicable Scenarios
Find the weighted shortest path between two vertices, for example:
- in a transport network, find the cheapest way to travel from city A to city B
3.2.11 Single Source Shortest Path
3.2.11.1 Function Description
Starting from one vertex, find the shortest paths from that vertex to the other vertices in the graph (optionally weighted).
Params
- source: ID of the start vertex, required
- direction: direction in which edges are expanded from the start vertex (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge label, optional, defaults to all edge labels
- weight: the edge property used as weight, optional; it must be a numeric property. If omitted, or if an edge lacks the property, the weight is 1.0
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- skip_degree: minimum edge count at which a super vertex is discarded during the query, i.e. a vertex is skipped entirely once its number of adjacent edges exceeds skip_degree. Optional; when enabled it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled), i.e. no vertex is skipped. (Note: when enabled, the traversal attempts to visit up to skip_degree edges of each vertex rather than just max_degree edges, which adds overhead and may noticeably hurt query performance; make sure you understand this before enabling it.)
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: number of target vertices to find, which is also the number of shortest paths returned, optional, defaults to 10
- with_vertex: true returns complete vertex information (all vertices on the paths); false returns only vertex IDs. Optional, defaults to false
3.2.11.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/singlesourceshortestpath?source="1:marko"&with_vertex=true
Response Status
200
Response Body
{
"paths": {
"2:ripple": {
......
}
]
}
3.2.11.3 Applicable Scenarios
Find weighted shortest paths from one vertex to other vertices, for example:
- find the fastest travel plans from Beijing to every other city in the country
3.2.12 Multi Node Shortest Path
3.2.12.1 Function Description
Find the shortest path between every pair of vertices in a given vertex set.
Params
- vertices: defines the start vertices, required; specified by one of:
  - ids: a list of vertex IDs
  - label and properties: if ids is not given, the start vertices are queried by the combined condition of label and properties
    - label: vertex label
    - properties: query the start vertices by property values
    Note: property values in properties may be lists, meaning a vertex matches as long as its value for the key is contained in the list
- step: the path pattern walked from start to end vertices, required, with the following structure:
  - direction: edge direction (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge labels
  - properties: filter edges by property values
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before version 0.12 only degree was accepted as the parameter name inside step; since 0.12 the unified name is max_degree, with degree kept for backward compatibility)
  - skip_degree: minimum edge count at which a super vertex is discarded during the query, i.e. a vertex is skipped entirely once its number of adjacent edges exceeds skip_degree. Optional; when enabled it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled), i.e. no vertex is skipped. (Note: when enabled, the traversal attempts to visit up to skip_degree edges of each vertex rather than just max_degree edges, which adds overhead and may noticeably hurt query performance; make sure you understand this before enabling it.)
- max_depth: number of steps, required
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- with_vertex: true returns complete vertex information (all vertices on the paths); false returns only vertex IDs. Optional, defaults to false
3.2.12.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/multinodeshortestpath
Request Body
{
"vertices": {
"ids": ["382:marko", "382:josh", "382:vadas", "382:peter", "383:lop", "383:ripple"]
},
......
}
]
}
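A sketch of a complete request for this API (the ids come from the documented example above; the other field values are assumptions showing the documented defaults):

import requests

payload = {
    "vertices": {"ids": ["382:marko", "382:josh", "382:vadas"]},
    "step": {"direction": "BOTH", "properties": {}},
    "max_depth": 10,
    "capacity": 10000000,
    "with_vertex": True,
}
resp = requests.post(
    "http://localhost:8080/graphs/hugegraph/traversers/multinodeshortestpath",
    json=payload)
print(resp.json()["paths"])  # one shortest path per reachable vertex pair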
3.2.12.3 Applicable Scenarios
Find shortest paths among multiple vertices, for example:
- find the shortest paths among several companies and their legal representatives
3.2.13 Paths (GET, basic)
3.2.13.1 Function Description
Find all paths between a start vertex and a target vertex, given the direction, edge label (optional), maximum depth and other conditions.
Params
- source: ID of the start vertex, required
- target: ID of the target vertex, required
- direction: direction in which edges are expanded from the start vertex (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge label, optional, defaults to all edge labels
- max_depth: number of steps, required
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of paths returned, optional, defaults to 10
3.2.13.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/paths?source="1:marko"&target="1:josh"&max_depth=5
Response Status
200
Response Body
{
"paths":[
{
......
}
]
}
3.2.13.3 Applicable Scenarios
Find all paths between two vertices, for example:
- in social networks, find all possible relationship paths between two users
- in device-association networks, find all association paths between two devices
3.2.14 Paths (POST, advanced)
3.2.14.1 Function Description
Find all paths between start and target vertices, given a step, maximum depth and other conditions.
Params
- sources: defines the start vertices, required; specified by one of:
  - ids: a list of vertex IDs
  - label and properties: if ids is not given, the start vertices are queried by the combined condition of label and properties
    - label: vertex label
    - properties: query the start vertices by property values
    Note: property values in properties may be lists, meaning a vertex matches as long as its value for the key is contained in the list
- targets: defines the end vertices, required; specified by one of:
  - ids: a list of vertex IDs
  - label and properties: if ids is not given, the end vertices are queried by the combined condition of label and properties
    - label: vertex label
    - properties: query the end vertices by property values
    Note: property values in properties may be lists, meaning a vertex matches as long as its value for the key is contained in the list
- step: the path pattern walked from start to end vertices, required, with the following structure:
  - direction: edge direction (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge labels
  - properties: filter edges by property values
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before version 0.12 only degree was accepted as the parameter name inside step; since 0.12 the unified name is max_degree, with degree kept for backward compatibility)
  - skip_degree: minimum edge count at which a super vertex is discarded during the query, i.e. a vertex is skipped entirely once its number of adjacent edges exceeds skip_degree. Optional; when enabled it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled), i.e. no vertex is skipped. (Note: when enabled, the traversal attempts to visit up to skip_degree edges of each vertex rather than just max_degree edges, which adds overhead and may noticeably hurt query performance; make sure you understand this before enabling it.)
- max_depth: number of steps, required
- nearest: when true, the shortest path from a start vertex to a result vertex has length depth and no shorter path exists; when false, there is a path of length depth from the start vertex to the result vertex (not necessarily the shortest, and cycles are allowed). Optional, defaults to true
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of paths returned, optional, defaults to 10
- with_vertex: true returns complete vertex information (all vertices on the paths); false returns only vertex IDs. Optional, defaults to false
3.2.14.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/paths
Request Body
{
"sources": {
"ids": ["1:marko"]
},
......
}
]
}
3.2.14.3 Applicable Scenarios
Find all paths between two vertices, for example:
- in social networks, find all possible relationship paths between two users
- in device-association networks, find all association paths between two devices
3.2.15 Customized Paths
3.2.15.1 Function Description
Find all paths that match a batch of start vertices, edge rules (direction, edge labels and property filters), maximum depth and other conditions.
Params
- sources: defines the start vertices, required; specified by one of:
  - ids: a list of vertex IDs
  - label and properties: if ids is not given, the start vertices are queried by the combined condition of label and properties
    - label: vertex label
    - properties: query the start vertices by property values
    Note: property values in properties may be lists, meaning a vertex matches as long as its value for the key is contained in the list
- steps: the path rules walked from the start vertices, a list of Steps, required. Each Step has the following structure:
  - direction: edge direction (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge labels
  - properties: filter edges by property values
  - weight_by: the property used to compute edge weights; effective when sort_by is not NONE; mutually exclusive with default_weight
  - default_weight: the default weight used when an edge lacks a weight property; effective when sort_by is not NONE; mutually exclusive with weight_by
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before version 0.12 only degree was accepted as the parameter name inside step; since 0.12 the unified name is max_degree, with degree kept for backward compatibility)
  - sample: set when the edges matching a step should be sampled; -1 means no sampling; defaults to sampling 100
- sort_by: sort by path weight, optional, defaults to NONE:
  - NONE: no sorting (default)
  - INCR: ascending by path weight
  - DECR: descending by path weight
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of paths returned, optional, defaults to 10
- with_vertex: true returns complete vertex information (all vertices on the paths); false returns only vertex IDs. Optional, defaults to false
3.2.15.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/customizedpaths
Request Body
{
"sources":{
"ids":[
......
}
]
}
3.2.15.3 Applicable Scenarios
Suitable for finding all kinds of complex path sets, for example:
- in social networks, find the paths from users who watched films directed by Zhang Yimou to the major influencers those users follow (Zhang Yimou -> film -> user -> influencer)
- in risk-control networks, find the paths from several high-risk users through their immediate relatives to those relatives' friends (high-risk user -> immediate relative -> friend)
3.2.16 Template Paths
3.2.16.1 Function Description
Find all paths that match a batch of start vertices, edge rules (direction, edge labels and property filters), maximum depth and other conditions.
Params
- sources: defines the start vertices, required; specified by one of:
  - ids: a list of vertex IDs
  - label and properties: if ids is not given, the start vertices are queried by the combined condition of label and properties
    - label: vertex label
    - properties: query the start vertices by property values
    Note: property values in properties may be lists, meaning a vertex matches as long as its value for the key is contained in the list
- targets: defines the end vertices, required; specified by one of:
  - ids: a list of vertex IDs
  - label and properties: if ids is not given, the end vertices are queried by the combined condition of label and properties
    - label: vertex label
    - properties: query the end vertices by property values
    Note: property values in properties may be lists, meaning a vertex matches as long as its value for the key is contained in the list
- steps: the path rules walked from the start vertices, a list of Steps, required. Each Step has the following structure:
  - direction: edge direction (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge labels
  - properties: filter edges by property values
  - max_times: how many times the current step may repeat; a value of N means the step can be traversed 1 to N times from the start vertex
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before version 0.12 only degree was accepted as the parameter name inside step; since 0.12 the unified name is max_degree, with degree kept for backward compatibility)
  - skip_degree: minimum edge count at which a super vertex is discarded during the query, i.e. a vertex is skipped entirely once its number of adjacent edges exceeds skip_degree. Optional; when enabled it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled), i.e. no vertex is skipped. (Note: when enabled, the traversal attempts to visit up to skip_degree edges of each vertex rather than just max_degree edges, which adds overhead and may noticeably hurt query performance; make sure you understand this before enabling it.)
- with_ring: Boolean; true includes cycles, false excludes them, defaults to false
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of paths returned, optional, defaults to 10
- with_vertex: true returns complete vertex information (all vertices on the paths); false returns only vertex IDs. Optional, defaults to false
3.2.16.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/templatepaths
Request Body
{
"sources": {
"ids": [],
"label": "person",
......
}
]
}
3.2.16.3 Applicable Scenarios
Suitable for all kinds of complex template paths, e.g. personA -(friend)-> personB -(classmate)-> personC, where the "friend" and "classmate" edges may span at most 3 and 4 hops respectively.
3.2.17 Crosspoints
3.2.17.1 Function Description
Find intersection vertices, given a start vertex, a target vertex, the direction, edge label (optional) and maximum depth.
Params
- source: ID of the start vertex, required
- target: ID of the target vertex, required
- direction: direction from the start vertex towards the target vertex (OUT, IN, BOTH); the target-to-start direction is the reverse; BOTH ignores direction. Optional, defaults to BOTH
- label: edge label, optional, defaults to all edge labels
- max_depth: number of steps, required
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of crosspoints returned, optional, defaults to 10
3.2.17.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/crosspoints?source="2:lop"&target="2:ripple"&max_depth=5&direction=IN
Response Status
200
Response Body
{
"crosspoints":[
{
......
}
]
}
3.2.17.3 Applicable Scenarios
Find the intersection vertices of two vertices together with the paths to them, for example:
- in social networks, find the topics or influencers that two users both follow
- in family graphs, find common ancestors
3.2.18 Customized Crosspoints
3.2.18.1 Function Description
Find the intersection of the end points of all paths that match a batch of start vertices, multiple edge rules (direction, edge labels and property filters) and maximum depth.
Params
- sources: defines the start vertices, required; specified by one of:
  - ids: a list of vertex IDs
  - label and properties: if ids is not given, the start vertices are queried by the combined condition of label and properties
    - label: vertex label
    - properties: query the start vertices by property values
    Note: property values in properties may be lists, meaning a vertex matches as long as its value for the key is contained in the list
- path_patterns: the path rules walked from the start vertices, a list of rules, required. Each rule is a PathPattern:
  - each PathPattern is a list of Steps; each Step has the following structure:
    - direction: edge direction (OUT, IN, BOTH), defaults to BOTH
    - labels: list of edge labels
    - properties: filter edges by property values
    - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before version 0.12 only degree was accepted as the parameter name inside step; since 0.12 the unified name is max_degree, with degree kept for backward compatibility)
    - skip_degree: minimum edge count at which a super vertex is discarded during the query, i.e. a vertex is skipped entirely once its number of adjacent edges exceeds skip_degree. Optional; when enabled it must satisfy skip_degree >= max_degree. Defaults to 0 (disabled), i.e. no vertex is skipped. (Note: when enabled, the traversal attempts to visit up to skip_degree edges of each vertex rather than just max_degree edges, which adds overhead and may noticeably hurt query performance; make sure you understand this before enabling it.)
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of paths returned, optional, defaults to 10
- with_path: true returns the paths on which the crosspoints lie, false does not, optional, defaults to false
- with_vertex: optional, defaults to false:
  - true returns complete vertex information
    - when with_path is true, complete information for all vertices on the paths
    - when with_path is false, complete information for all crosspoints
  - false returns only vertex IDs
3.2.18.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/customizedcrosspoints
Request Body
{
"sources":{
"ids":[
"2:lop",
......
}
]
}
3.2.18.3 Applicable Scenarios
Query cases where a group of vertices' paths intersect at their end points, for example:
- in a product knowledge graph, several phones, learning machines and game consoles all belong, through different lower-level category paths, to the first-level category "electronic devices"
3.2.19 Rings
3.2.19.1 Function Description
Find reachable cycles, given a start vertex, the direction, edge label (optional) and maximum depth.
For example: 1 -> 25 -> 775 -> 14690 -> 25, where the cycle is 25 -> 775 -> 14690 -> 25.
Params
- source: ID of the start vertex, required
- direction: direction of the edges leaving the start vertex (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge label, optional, defaults to all edge labels
- max_depth: number of steps, required
- source_in_ring: whether the cycle must contain the start vertex, optional, defaults to true
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of reachable cycles returned, optional, defaults to 10
3.2.19.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/rings?source="1:marko"&max_depth=2
Response Status
200
Response Body
{
"rings":[
{
......
}
]
}
3.2.19.3 Applicable Scenarios
Query cycles reachable from a start vertex, for example:
- in risk-control projects, query the people or devices in guarantee cycles reachable from a user
- in device-association networks, discover devices with circular references around a device
3.2.20 Rays
3.2.20.1 Function Description
Find paths that radiate from a start vertex out to boundary vertices, given the direction, edge label (optional) and maximum depth.
For example: 1 -> 25 -> 775 -> 14690 -> 2289 -> 18379, where 18379 is a boundary vertex, i.e. no edges leave 18379.
Params
- source: ID of the start vertex, required
- direction: direction of the edges leaving the start vertex (OUT, IN, BOTH), optional, defaults to BOTH
- label: edge label, optional, defaults to all edge labels
- max_depth: number of steps, required
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of non-cyclic paths returned, optional, defaults to 10
3.2.20.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/rays?source="1:marko"&max_depth=2&direction=OUT
Response Status
200
Response Body
{
"rays":[
{
......
}
]
}
3.2.20.3 Applicable Scenarios
Find the paths from a start vertex to the boundary vertices of some relationship, for example:
- in family graphs, find the paths from a person to all descendants who have no children yet
- in device-association networks, find the paths from a device to its terminal devices
3.2.21 Fusiform Similarity
3.2.21.1 Function Description
Query the "fusiform similar vertices" of a batch of vertices under given conditions. Two vertices are considered "fusiform similar" when they are related to many common vertices. For example: if "reader A" has read 100 books, the readers who have read more than 80 of those 100 books can be defined as "fusiform similar vertices" of "reader A".
Params
- sources: defines the start vertices, required; specified by one of:
  - ids: a list of vertex IDs
  - label and properties: if ids is not given, the start vertices are queried by the combined condition of label and properties
    - label: vertex label
    - properties: query the start vertices by property values
    Note: property values in properties may be lists, meaning a vertex matches as long as its value for the key is contained in the list
- label: edge label, optional, defaults to all edge labels
- direction: direction in which edges are expanded from the start vertices (OUT, IN, BOTH), optional, defaults to BOTH
- min_neighbors: minimum number of neighbors; a start vertex with fewer neighbors than this threshold is considered to have no "fusiform similar vertices". For instance, when looking for fusiform similar vertices based on the books "reader A" has read, min_neighbors = 100 means "reader A" must have read at least 100 books to have any. Required
- alpha: similarity, i.e. the ratio of the number of neighbors shared by the start vertex and a "fusiform similar vertex" to the total number of the start vertex's neighbors, required (see the formula after this list)
- min_similars: minimum number of "fusiform similar vertices"; a start vertex and its similars are returned only when it has at least this many, optional, defaults to 1
- top: return the top most similar "fusiform similar vertices" of each start vertex, required; 0 means all
- group_property: used together with min_groups; a start vertex and its "fusiform similar vertices" are returned only when some property takes at least min_groups distinct values among them. For instance, to recommend book friends "from other places" to "reader A", set group_property to the reader's "city" property and min_groups to at least 2. Optional; omitted means no property-based filtering
- min_groups: used together with group_property; only meaningful when group_property is set
- max_degree: maximum number of adjacent edges traversed per vertex during the query, optional, defaults to 10000
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
- limit: maximum number of results returned (one start vertex together with its "fusiform similar vertices" counts as one result), optional, defaults to 10
- with_intermediary: whether to return the intermediate vertices jointly associated with the start vertex and its "fusiform similar vertices", defaults to false
- with_vertex: optional, defaults to false:
  - true returns complete vertex information
  - false returns only vertex IDs
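Reading the description of alpha above as a formula (our interpretation, not a statement from the original text): with N(v) the neighbor set of vertex v restricted to the given label and direction, a candidate vertex t counts as a "fusiform similar vertex" of a start vertex s when

\frac{|N(s) \cap N(t)|}{|N(s)|} \ge \alpha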
3.2.21.2 Usage
Method & Url
POST http://localhost:8080/graphs/hugegraph/traversers/fusiformsimilarity
Request Body
{
"sources":{
"ids":[],
"label": "person",
......
}
]
}
3.2.21.3 Applicable Scenarios
Query vertices that are highly similar to a group of vertices, for example:
- readers with reading lists similar to another reader's
- players who play games similar to another player's
3.2.22 Vertices
3.2.22.1 Query vertices in batch by a list of vertex IDs
Params
- ids: list of vertex IDs to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/vertices?ids="1:marko"&ids="2:lop"
Response Status
200
Response Body
{
"vertices":[
{
......
}
]
}
3.2.22.2 Get vertex Shard information
Get vertex shard information by a specified shard size split_size (can be used together with the Scan in 3.2.22.3 to retrieve vertices).
Params
- split_size: shard size, required
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/vertices/shards?split_size=67108864
Response Status
200
Response Body
{
"shards":[
{
......
]
}
3.2.22.3 Get vertices in batch by Shard information
Query vertices in batch by the specified shard information (see 3.2.22.2 for how to get shards).
Params
- start: shard start position, required
- end: shard end position, required
- page: paging position, optional, defaults to null (no paging); when page is the empty string "" it denotes the first page, starting from the position indicated by start
- page_limit: upper limit of the number of vertices per page when paging, optional, defaults to 100000
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/vertices/scan?start=0&end=4294967295
Response Status
200
Response Body
{
"vertices":[
{
......
}
]
}
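The two endpoints above are meant to be combined; a sketch of a full vertex scan in Python (assuming each returned shard entry carries start and end fields, as the scan parameters suggest):

import requests

base = "http://localhost:8080/graphs/hugegraph/traversers/vertices"
shards = requests.get(base + "/shards",
                      params={"split_size": 67108864}).json()["shards"]
for shard in shards:
    # scan one shard; paging via `page`/`page_limit` is omitted for brevity
    resp = requests.get(base + "/scan",
                        params={"start": shard["start"], "end": shard["end"]})
    for vertex in resp.json()["vertices"]:
        print(vertex["id"])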
3.2.22.4 Applicable Scenarios
- Querying vertices by an ID list is useful for batch lookups; for example, after a path query returns several paths, all vertex properties of a particular path can be fetched in one call.
- Getting shards and querying vertices by shard can be used to traverse all vertices.
3.2.23 Edges
3.2.23.1 Query edges in batch by a list of edge IDs
Params
- ids: list of edge IDs to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/edges?ids="S1:josh>1>>S2:lop"&ids="S1:josh>1>>S2:ripple"
Response Status
200
Response Body
{
"edges": [
{
......
}
]
}
3.2.23.2 Get edge Shard information
Get edge shard information by a specified shard size split_size (can be used together with the Scan in 3.2.23.3 to retrieve edges).
Params
- split_size: shard size, required
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/edges/shards?split_size=4294967295
Response Status
200
Response Body
{
"shards":[
{
......
}
]
}
3.2.23.3 Get edges in batch by Shard information
Query edges in batch by the specified shard information (see 3.2.23.2 for how to get shards).
Params
- start: shard start position, required
- end: shard end position, required
- page: paging position, optional, defaults to null (no paging); when page is the empty string "" it denotes the first page, starting from the position indicated by start
- page_limit: upper limit of the number of edges per page when paging, optional, defaults to 100000
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/edges/scan?start=0&end=3221225469
Response Status
200
Response Body
{
"edges":[
{
......
}
]
}
Note: change the value of input.path in the mapping file to your own local path.
4.2.1.1 Function Description
Applicable to bipartite graphs; returns, for a source vertex, all related vertices together with their relevance scores.
A bipartite graph is a special model in graph theory, as well as a special flow network. Its defining property is that the vertices can be divided into two sets, with edges only between the two sets and no direct edges within either set.
Suppose there is a bipartite graph of users and items. The random-walk-based PersonalRank algorithm works as follows:
- Select a start user u with initial weight 1.0, and walk from Vu (with probability alpha move to a neighbor, with probability 1 - alpha stay);
- If the walk continues, pick one type of outgoing edge, e.g. rating, to find common raters: choose one of the current vertex's neighbors uniformly at random, splitting the weight uniformly;
- Compensate the source vertex with weight 1 - alpha;
- Repeat step 2;
- Converge after a certain number of steps, or once the precision threshold is reached, yielding a recommendation list.
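A plain-Python sketch of the iteration just described, to make the weight flow concrete (a simplified in-memory illustration, not the server's implementation):

# adj: {vertex: [neighbor, ...]} over the chosen edge label, e.g. "rating"
def personal_rank(adj, source, alpha=0.85, max_depth=5):
    ranks = {source: 1.0}
    for _ in range(max_depth):
        new_ranks = {}
        for v, weight in ranks.items():
            neighbors = adj.get(v, [])
            if not neighbors:
                continue
            # with probability alpha the walk continues, split uniformly
            share = alpha * weight / len(neighbors)
            for n in neighbors:
                new_ranks[n] = new_ranks.get(n, 0.0) + share
        # compensate the source vertex with weight 1 - alpha
        new_ranks[source] = new_ranks.get(source, 0.0) + (1 - alpha)
        ranks = new_ranks
    return ranks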
Params
Required:
- source: ID of the source vertex
- label: label of the edges leaving the source vertex; it must connect two different vertex types
Optional:
- alpha: probability of walking outward from a vertex in each iteration, similar to alpha in PageRank; range (0, 1], defaults to 0.85
- max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000
- max_depth: number of iterations, range [2, 50], defaults to 5
- with_label: which results to keep, one of the following three, defaults to BOTH_LABEL:
  - SAME_LABEL: keep only vertices of the same type as the source vertex
  - OTHER_LABEL: keep only vertices of the other type (the opposite side of the bipartite graph)
  - BOTH_LABEL: keep vertices of both types
- limit: maximum number of vertices returned, defaults to 100
- max_diff: precision difference for early convergence, defaults to 0.0001 (to be implemented)
- sorted: whether the results are sorted by rank; true sorts in descending order, false leaves them unsorted; defaults to true
4.2.1.2 Usage
Method & Url
POST http://localhost:8080/graphs/hugegraph/traversers/personalrank
Request Body
{
"source": "1:1",
"label": "rating",
"alpha": 0.6,
......
}
}
4.2.2.1 Function Description
In a general graph, compute, for each layer, the top N vertices most relevant to a given start vertex together with their relevance. In graph terms: walking outward from the start vertex, the probability of reaching each vertex at each layer.
Params
- source: ID of the source vertex, required
- alpha: probability of walking outward from a vertex in each iteration, similar to alpha in PageRank, required; range (0, 1]
- steps: the path rules walked from the start vertex, a list of Steps, each corresponding to one layer of the result, required. Each Step has the following structure:
  - direction: edge direction (OUT, IN, BOTH), defaults to BOTH
  - labels: list of edge labels; multiple labels are unioned
  - max_degree: maximum number of adjacent edges traversed per vertex during the query, defaults to 10000 (note: before version 0.12 only degree was accepted as the parameter name inside step; since 0.12 the unified name is max_degree, with degree kept for backward compatibility)
  - top: keep only the top N highest-weighted results in each layer, defaults to 100, maximum 1000
- capacity: maximum number of vertices visited during the traversal, optional, defaults to 10000000
4.2.2.2 Usage
Method & Url
POST http://localhost:8080/graphs/hugegraph/traversers/neighborrank
Request Body
{
"source":"O",
"steps":[
{
......
}
]
}
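A sketch of issuing the request above from Python (the source vertex "O" mirrors the documented example; the "follow" edge label is a placeholder for your own schema):

import requests

payload = {
    "source": "O",
    "alpha": 0.6,
    "steps": [
        {"direction": "OUT", "labels": ["follow"], "top": 100},  # layer 1
        {"direction": "OUT", "labels": ["follow"], "top": 100},  # layer 2
    ],
}
resp = requests.post(
    "http://localhost:8080/graphs/hugegraph/traversers/neighborrank",
    json=payload)
print(resp.json())  # one ranked group of vertices per step/layer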
4.2.2.3 Applicable Scenarios
Find the vertices that should be recommended to a given start vertex in each layer, for example:
- in a four-layer graph of viewers, friends, movies and directors, recommend movies to a viewer based on the movies the viewer's friends like, or recommend directors based on who directed those movies
11 - Variable API
5.1 Variables
Variables can be used to store data about the whole graph; the data is accessed as key-value pairs.
5.1.1 Create or update a key-value pair
Method & Url
PUT http://localhost:8080/graphs/hugegraph/variables/name
Request Body
{
"data": "tom"
}
Response Status
200
Response Body
{
"name": "tom"
}
5.1.2 List all key-value pairs
Method & Url
GET http://localhost:8080/graphs/hugegraph/variables
Response Status
200
Response Body
{
"name": "tom"
}
5.1.3 Get a specific key-value pair
Method & Url
GET http://localhost:8080/graphs/hugegraph/variables/name
Response Status
200
Response Body
{
"name": "tom"
}
5.1.4 Delete a specific key-value pair
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/variables/name
Response Status
204
12 - Graphs API
6.1 Graphs
6.1.1 List all graphs in the database
Method & Url
GET http://localhost:8080/graphs
Response Status
200
Response Body
{
"graphs": [
"hugegraph",
"hugegraph1"
]
}
6.1.2 View the information of a graph
Method & Url
GET http://localhost:8080/graphs/hugegraph
Response Status
200
Response Body
{
"name": "hugegraph",
"backend": "cassandra"
}
6.1.3 Clear all data of a graph, including schema, vertices, edges and indexes; this operation requires admin permission
Params
Since clearing a graph is a dangerous operation, the API takes a confirmation parameter to prevent accidental calls:
- confirm_message: defaults to
I'm sure to delete all data
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/clear?confirm_message=I%27m+sure+to+delete+all+data
Response Status
204
6.1.4 Clone a graph; this operation requires admin permission
Params
- clone_graph_name: name of an existing graph. When cloning from an existing graph, the user may optionally pass a configuration file, which then replaces the configuration of the existing graph
Method & Url
POST http://localhost:8080/graphs/hugegraph_clone?clone_graph_name=hugegraph
Request Body [optional]
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
backend=rocksdb
serializer=binary
store=hugegraph_clone
rocksdb.data_path=./hg2
rocksdb.wal_path=./hg2
Response Status
200
Response Body
{
"name": "hugegraph_clone",
"backend": "rocksdb"
}
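When calling the clear/drop endpoints programmatically, it is easiest to let the HTTP client percent-encode confirm_message instead of hand-writing %27m+sure... into the URL; a sketch:

import requests

resp = requests.delete("http://localhost:8080/graphs/hugegraph/clear",
                       params={"confirm_message": "I'm sure to delete all data"})
assert resp.status_code == 204  # no body on success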
6.1.5 Create a graph; this operation requires admin permission
Method & Url
POST http://localhost:8080/graphs/hugegraph2
Request Body
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
backend=rocksdb
serializer=binary
store=hugegraph2
rocksdb.data_path=./hg2
rocksdb.wal_path=./hg2
Response Status
200
Response Body
{
"name": "hugegraph2",
"backend": "rocksdb"
}
6.1.6 Delete a graph and all of its data
Params
Since deleting a graph is a dangerous operation, the API takes a confirmation parameter to prevent accidental calls:
- confirm_message: defaults to
I'm sure to drop the graph
Method & Url
DELETE http://localhost:8080/graphs/hugegraph_clone?confirm_message=I%27m%20sure%20to%20drop%20the%20graph
Response Status
204
6.2 Conf
6.2.1 View the configuration of a graph; this operation requires admin permission
Method & Url
GET http://localhost:8080/graphs/hugegraph/conf
Response Status
200
Response Body
# gremlin entrance to create graph
gremlin.graph=com.baidu.hugegraph.HugeFactory

# cache config
#schema.cache_capacity=1048576
#graph.cache_capacity=10485760
#graph.cache_expire=600

# schema illegal name template
#schema.illegal_name_regex=\s+|~.*

#vertex.default_label=vertex

backend=cassandra
serializer=cassandra

store=hugegraph
...
6.3 Mode
Valid graph modes are: NONE, RESTORING, MERGING, LOADING
- NONE (default): writes of schema and graph data behave normally. Specifically:
  - schema creation may not specify an ID
  - graph data (vertices) may not specify an ID when the id strategy is Automatic
- LOADING: enabled automatically during bulk import. Specifically:
  - required properties are not checked when adding vertices/edges
There are two distinct modes during a Restore: Restoring and Merging
- Restoring: restore into a new graph. Specifically:
  - schema creation may specify an ID
  - graph data (vertices) may specify an ID when the id strategy is Automatic
- Merging: merge into a graph that already contains schema and graph data. Specifically:
  - schema creation may not specify an ID
  - graph data (vertices) may specify an ID when the id strategy is Automatic
Normally the graph mode is NONE. When a graph needs to be restored, temporarily set the mode to RESTORING or MERGING as appropriate, and set it back to NONE once the restore completes.
6.3.1 View the mode of a graph
Method & Url
GET http://localhost:8080/graphs/hugegraph/mode
Response Status
200
Response Body
{
"mode": "NONE"
}
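A sketch of the restore workflow described above, switching the mode around the restore and back to NONE afterwards (the restore step itself is elided):

import requests

mode_url = "http://localhost:8080/graphs/hugegraph/mode"

requests.put(mode_url, json="RESTORING")  # body is the bare JSON string
# ... perform the restore here ...
requests.put(mode_url, json="NONE")
print(requests.get(mode_url).json())      # {"mode": "NONE"}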
Valid graph modes are: NONE, RESTORING, MERGING
6.3.2 Set the mode of a graph; this operation requires admin permission
Method & Url
PUT http://localhost:8080/graphs/hugegraph/mode
Request Body
"RESTORING"
Valid graph modes are: NONE, RESTORING, MERGING
Response Status
200
Response Body
{
"mode": "RESTORING"
}
6.3.3 View the read mode of a graph
Params
- name: name of the graph
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph_read_mode
Response Status
200
Response Body
{
"graph_read_mode": "ALL"
}
6.3.4 Set the read mode of a graph; this operation requires admin permission
Params
- name: name of the graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/graph_read_mode
Request Body
"OLTP_ONLY"
Valid graph read modes are: ALL, OLTP_ONLY, OLAP_ONLY
Response Status
200
Response Body
{
"graph_read_mode": "OLTP_ONLY"
}
6.4 Snapshot
6.4.1 Create a snapshot
Params
- name: name of the graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/snapshot_create
Response Status
200
Response Body
{
"hugegraph": "snapshot_created"
}
6.4.2 Restore from a snapshot
Params
- name: name of the graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/snapshot_resume
Response Status
200
Response Body
{
"hugegraph": "snapshot_resumed"
}
6.5 Compact
6.5.1 Manually compact a graph; this operation requires admin permission
Params
- name: name of the graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/compact
Response Status
200
Response Body
{
"nodes": 1,
"cluster_id": "local",
......
"local": "OK"
}
}
13 - Task API
7.1 Task
7.1.1 List all asynchronous tasks of a graph
Params
- status: status of the asynchronous tasks
- limit: upper limit of the number of tasks returned
Method & Url
GET http://localhost:8080/graphs/hugegraph/tasks?status=success
Response Status
200
Response Body
{
"tasks": [{
"task_name": "hugegraph.traversal().V()",
......
"task_input": "{\"gremlin\":\"hugegraph.traversal().V()\",\"bindings\":{},\"language\":\"gremlin-groovy\",\"aliases\":{\"hugegraph\":\"graph\"}}"
}]
}
7.1.2 View the information of an asynchronous task
Method & Url
GET http://localhost:8080/graphs/hugegraph/tasks/2
Response Status
200
Response Body
{
"task_name": "hugegraph.traversal().V()",
"task_progress": 0,
......
"task_callable": "com.baidu.hugegraph.api.job.GremlinAPI$GremlinJob",
"task_input": "{\"gremlin\":\"hugegraph.traversal().V()\",\"bindings\":{},\"language\":\"gremlin-groovy\",\"aliases\":{\"hugegraph\":\"graph\"}}"
}
7.1.3 Delete the information of an asynchronous task (this does not delete the task itself)
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/tasks/2
Response Status
204
7.1.4 Cancel an asynchronous task; the task must be able to handle interruption
Suppose an asynchronous task has been created via the Gremlin API as follows:
"for (int i = 0; i < 10; i++) {" +
"hugegraph.addVertex(T.label, 'man');" +
"hugegraph.tx().commit();" +
......
"break;" +
"}" +
"}"
Method & Url
PUT http://localhost:8080/graphs/hugegraph/tasks/2?action=cancel
Make sure the request is sent within 10 seconds; if it is sent later, the task may have already finished and can no longer be cancelled.
Response Status
202
Response Body
{
"cancelled": true
}
At this point, the number of vertices with label man is guaranteed to be less than 10.
14 - Gremlin API
8.1 Gremlin
8.1.1 Send a gremlin statement to HugeGraphServer (GET), executed synchronously
Params
- gremlin: the gremlin statement to send to HugeGraphServer for execution
- bindings: binds parameters; keys are strings and values are the bound values (strings or numbers only); similar in spirit to MySQL prepared statements, used to speed up statement execution
- language: language type of the statement, defaults to gremlin-groovy
- aliases: adds aliases for existing variables in the graph space
Query vertices
Method & Url
GET http://127.0.0.1:8080/gremlin?gremlin=hugegraph.traversal().V('1:marko')
Response Status
200
Response Body
{
"requestId": "c6ef47a8-b634-4b07-9d38-6b3b69a3a556",
"status": {
......
"meta": {}
}
}
8.1.2 Send a gremlin statement to HugeGraphServer (POST), executed synchronously
Method & Url
POST http://localhost:8080/gremlin
Query vertices
Request Body
{
"gremlin": "hugegraph.traversal().V('1:marko')",
"bindings": {},
"language": "gremlin-groovy",
......
"meta": {}
}
}
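To illustrate the bindings parameter described in 8.1.1, a sketch of a parameterized query from Python (the vid placeholder name is our own choice; the alias mapping mirrors the documented example):

import requests

payload = {
    "gremlin": "hugegraph.traversal().V(vid)",  # vid resolved via bindings
    "bindings": {"vid": "1:marko"},
    "language": "gremlin-groovy",
    "aliases": {"hugegraph": "graph"},
}
resp = requests.post("http://localhost:8080/gremlin", json=payload)
print(resp.json()["result"]["data"])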
8.1.3 Send a gremlin statement to HugeGraphServer (POST), executed asynchronously
Method & Url
POST http://localhost:8080/graphs/hugegraph/jobs/gremlin
Query vertices
Request Body
{
"gremlin": "g.V('1:marko')",
"bindings": {},
"language": "gremlin-groovy",
......
"user_phone": "182****9088",
"user_email": "123@xx.com"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/users
Response Status
201
Response Body
In the response, the password is returned as an encrypted ciphertext.
{
"user_password": "******",
"user_email": "123@xx.com",
......
"id": "-63:boss",
"user_create": "2020-11-17 14:31:07.833"
}
9.2.2 Delete a user
Params
- id: ID of the user to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/users/-63:test
Response Status
204
Response Body
1
9.2.3 Modify a user
Params
- id: ID of the user to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/users/-63:test
Request Body
Modify user_name, user_password and user_phone
{
"user_name": "test",
"user_password": "******",
"user_phone": "183****9266"
......
"id": "-63:test",
"user_create": "2020-11-12 10:27:13.601"
}
9.2.4 List users
Params
- limit: upper limit of the number of results returned
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/users
Response Status
200
Response Body
{
"users": [
{
......
}
]
}
9.2.5 Query a specific user
Params
- id: ID of the user to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/users/-63:admin
Response Status
200
Response Body
{
"users": [
{
......
}
]
}
9.2.6 Query the role of a specific user
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/users/-63:boss/role
Response Status
200
Response Body
{
"roles": {
"hugegraph": {
......
"group_name": "all",
"group_description": "group can do anything"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/groups
Response Status
201
Response Body
{
"group_creator": "admin",
"group_name": "all",
......
"id": "-69:all",
"group_description": "group can do anything"
}
9.3.2 Delete a group
Params
- id: ID of the group to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/groups/-69:grant
Response Status
204
Response Body
1
9.3.3 Modify a group
Params
- id: ID of the group to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/groups/-69:grant
Request Body
Modify group_description
{
"group_name": "grant",
"group_description": "grant"
}
......
"id": "-69:grant",
"group_description": "grant"
}
9.3.4 List groups
Params
- limit: upper limit of the number of results returned
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/groups
Response Status
200
Response Body
{
"groups": [
{
......
}
]
}
9.3.5 Query a specific group
Params
- id: ID of the group to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/groups/-69:all
Response Status
200
Response Body
{
"group_creator": "admin",
"group_name": "all",
......
}
]
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/targets
Response Status
201
Response Body
{
"target_creator": "admin",
"target_name": "all",
......
"id": "-77:all",
"target_update": "2020-11-11 15:32:01.192"
}
9.4.2 Delete a target (resource)
Params
- id: ID of the target to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/targets/-77:gremlin
Response Status
204
Response Body
1
9.4.3 Modify a target
Params
- id: ID of the target to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/targets/-77:gremlin
Request Body
Modify the type in the target definition
{
"target_name": "gremlin",
"target_graph": "hugegraph",
"target_url": "127.0.0.1:8080",
......
"id": "-77:gremlin",
"target_update": "2020-11-12 09:37:12.780"
}
9.4.4 List targets
Params
- limit: upper limit of the number of results returned
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/targets
Response Status
200
Response Body
{
"targets": [
{
......
}
]
}
9.4.5 Query a specific target
Params
- id: ID of the target to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/targets/-77:grant
Response Status
200
Response Body
{
"target_creator": "admin",
"target_name": "grant",
......
"user": "-63:boss",
"group": "-69:all"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/belongs
Response Status
201
Response Body
{
"belong_create": "2020-11-11 16:19:35.422",
"belong_creator": "admin",
......
"user": "-63:boss",
"group": "-69:all"
}
9.5.2 Delete a belong (user-group association)
Params
- id: ID of the belong to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:grant
Response Status
204
Response Body
1
9.5.3 Modify a belong
Only the description of a belong can be modified; the user and group properties cannot. To change an association, delete the old belong and create a new one.
Params
- id: ID of the belong to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:grant
Request Body
Modify belong_description
{
"belong_description": "update test"
}
Response Status
200
......
"user": "-63:boss",
"group": "-69:grant"
}
9.5.4 List belongs
Params
- limit: upper limit of the number of results returned
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/belongs
Response Status
200
Response Body
{
"belongs": [
{
@@ -4171,8 +4171,8 @@
}
]
}
9.5.5 View a specific belong
Params
- id: Id of the belong to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:all
Response Status
200
Response Body
{
"belong_create": "2020-11-11 16:19:35.422",
"belong_creator": "admin",
@@ -4186,8 +4186,8 @@
"target": "-77:all",
"access_permission": "READ"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/accesses
Response Status
201
Response Body
{
"access_permission": "READ",
"access_create": "2020-11-11 15:54:54.008",
@@ -4197,11 +4197,11 @@
"group": "-69:all",
"target": "-77:all"
}
9.6.2 Delete an access (permission grant)
Params
- id: Id of the access to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>12>S-77:all
Response Status
204
Response Body
1
9.6.3 Modify an access
Only the description of an access can be modified; the group, target and permission cannot be changed. To change an access relationship, delete the original access and create a new one.
Params
- id: Id of the access to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>12>S-77:all
Request Body
Modify access_description
{
"access_description": "test"
}
Response Status
200
@@ -4215,8 +4215,8 @@
"group": "-69:all",
"target": "-77:all"
}
9.6.4 List accesses
Params
- limit: maximum number of results to return
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/accesses
Response Status
200
Response Body
{
"accesses": [
{
@@ -4230,8 +4230,8 @@
}
]
}
9.6.5 Query a specific access
Params
- id: Id of the access to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>11>S-77:all
Response Status
200
Response Body
{
"access_permission": "READ",
"access_create": "2020-11-11 15:54:54.008",
@@ -4241,8 +4241,8 @@
"group": "-69:all",
"target": "-77:all"
}
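For illustration, the grant shown above expressed as a Python call (the requests package and the admin credentials are assumptions; field names are taken from the response bodies above):

import requests

resp = requests.post(
    "http://localhost:8080/graphs/hugegraph/auth/accesses",
    json={"group": "-69:all", "target": "-77:all", "access_permission": "READ"},
    auth=("admin", "pa$$word"),  # placeholder admin credentials
)
print(resp.status_code)  # 201 on success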
16 - Other API
10.1 Other
10.1.1 View HugeGraph version information
Method & Url
GET http://localhost:8080/versions
Response Status
200
Response Body
{
"versions": {
"version": "v1",
diff --git a/cn/docs/clients/restful-api/auth/index.html b/cn/docs/clients/restful-api/auth/index.html
index f0a7a6180..eea3275dc 100644
--- a/cn/docs/clients/restful-api/auth/index.html
+++ b/cn/docs/clients/restful-api/auth/index.html
@@ -45,8 +45,8 @@
"user_phone": "182****9088",
"user_email": "123@xx.com"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/users
Response Status
201
Response Body
In the response, the password is returned as encrypted ciphertext
{
"user_password": "******",
"user_email": "123@xx.com",
@@ -57,11 +57,11 @@
"id": "-63:boss",
"user_create": "2020-11-17 14:31:07.833"
}
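A minimal Python sketch of the call above (the requests package and the admin credentials are assumptions; the masked values are copied from the example and must be replaced with real ones):

import requests

user = {
    "user_name": "boss",
    "user_password": "******",     # masked in the docs; supply a real password
    "user_phone": "182****9088",   # masked in the docs
    "user_email": "123@xx.com",
}
resp = requests.post("http://localhost:8080/graphs/hugegraph/auth/users",
                     json=user, auth=("admin", "pa$$word"))
print(resp.status_code)   # 201 on success
print(resp.json()["id"])  # e.g. "-63:boss"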
9.2.2 Delete a user
Params
- id: Id of the user to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/users/-63:test
Response Status
204
Response Body
1
9.2.3 Modify a user
Params
- id: Id of the user to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/users/-63:test
Request Body
Modify user_name, user_password and user_phone
{
"user_name": "test",
"user_password": "******",
"user_phone": "183****9266"
@@ -76,8 +76,8 @@
"id": "-63:test",
"user_create": "2020-11-12 10:27:13.601"
}
9.2.4 List users
Params
- limit: maximum number of results to return
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/users
Response Status
200
Response Body
{
"users": [
{
@@ -90,8 +90,8 @@
}
]
}
9.2.5 Query a specific user
Params
- id: Id of the user to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/users/-63:admin
Response Status
200
Response Body
{
"users": [
{
@@ -104,8 +104,8 @@
}
]
}
9.2.6 Query the role of a specific user
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/users/-63:boss/role
Response Status
200
Response Body
{
"roles": {
"hugegraph": {
@@ -123,8 +123,8 @@
"group_name": "all",
"group_description": "group can do anything"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/groups
Response Status
201
Response Body
{
"group_creator": "admin",
"group_name": "all",
@@ -133,11 +133,11 @@
"id": "-69:all",
"group_description": "group can do anything"
}
9.3.2 Delete a group
Params
- id: Id of the group to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/groups/-69:grant
Response Status
204
Response Body
1
9.3.3 Modify a group
Params
- id: Id of the group to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/groups/-69:grant
Request Body
Modify group_description
{
"group_name": "grant",
"group_description": "grant"
}
@@ -150,8 +150,8 @@
"id": "-69:grant",
"group_description": "grant"
}
9.3.4 List groups
Params
- limit: maximum number of results to return
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/groups
Response Status
200
Response Body
{
"groups": [
{
@@ -164,8 +164,8 @@
}
]
}
9.3.5 Query a specific group
Params
- id: Id of the group to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/groups/-69:all
Response Status
200
Response Body
{
"group_creator": "admin",
"group_name": "all",
@@ -186,8 +186,8 @@
}
]
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/targets
Response Status
201
Response Body
{
"target_creator": "admin",
"target_name": "all",
@@ -204,11 +204,11 @@
"id": "-77:all",
"target_update": "2020-11-11 15:32:01.192"
}
9.4.2 Delete a target
Params
- id: Id of the target to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/targets/-77:gremlin
Response Status
204
Response Body
1
9.4.3 Modify a target
Params
- id: Id of the target to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/targets/-77:gremlin
Request Body
Modify the type in the target definition
{
"target_name": "gremlin",
"target_graph": "hugegraph",
"target_url": "127.0.0.1:8080",
@@ -235,8 +235,8 @@
"id": "-77:gremlin",
"target_update": "2020-11-12 09:37:12.780"
}
9.4.4 List targets
Params
- limit: maximum number of results to return
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/targets
Response Status
200
Response Body
{
"targets": [
{
@@ -273,8 +273,8 @@
}
]
}
9.4.5 Query a specific target
Params
- id: Id of the target to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/targets/-77:grant
Response Status
200
Response Body
{
"target_creator": "admin",
"target_name": "grant",
@@ -295,8 +295,8 @@
"user": "-63:boss",
"group": "-69:all"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/belongs
Response Status
201
Response Body
{
"belong_create": "2020-11-11 16:19:35.422",
"belong_creator": "admin",
@@ -305,11 +305,11 @@
"user": "-63:boss",
"group": "-69:all"
}
9.5.2 Delete a belong (user-group association)
Params
- id: Id of the belong to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:grant
Response Status
204
Response Body
1
9.5.3 Modify a belong
Only the description of a belong can be modified; the user and group properties cannot be changed. To modify a belong, delete the original association and create a new one.
Params
- id: Id of the belong to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:grant
Request Body
Modify belong_description
{
"belong_description": "update test"
}
Response Status
200
@@ -322,8 +322,8 @@
"user": "-63:boss",
"group": "-69:grant"
}
9.5.4 List belongs
Params
- limit: maximum number of results to return
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/belongs
Response Status
200
Response Body
{
"belongs": [
{
@@ -336,8 +336,8 @@
}
]
}
9.5.5 View a specific belong
Params
- id: Id of the belong to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:all
Response Status
200
Response Body
{
"belong_create": "2020-11-11 16:19:35.422",
"belong_creator": "admin",
@@ -351,8 +351,8 @@
"target": "-77:all",
"access_permission": "READ"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/accesses
Response Status
201
Response Body
{
"access_permission": "READ",
"access_create": "2020-11-11 15:54:54.008",
@@ -362,11 +362,11 @@
"group": "-69:all",
"target": "-77:all"
}
9.6.2 Delete an access (permission grant)
Params
- id: Id of the access to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>12>S-77:all
Response Status
204
Response Body
1
9.6.3 Modify an access
Only the description of an access can be modified; the group, target and permission cannot be changed. To change an access relationship, delete the original access and create a new one.
Params
- id: Id of the access to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>12>S-77:all
Request Body
Modify access_description
{
"access_description": "test"
}
Response Status
200
@@ -380,8 +380,8 @@
"group": "-69:all",
"target": "-77:all"
}
9.6.4 List accesses
Params
- limit: maximum number of results to return
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/accesses
Response Status
200
Response Body
{
"accesses": [
{
@@ -395,8 +395,8 @@
}
]
}
9.6.5 Query a specific access
Params
- id: Id of the access to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>11>S-77:all
Response Status
200
Response Body
{
"access_permission": "READ",
"access_create": "2020-11-11 15:54:54.008",
diff --git a/cn/docs/clients/restful-api/edge/index.html b/cn/docs/clients/restful-api/edge/index.html
index db55459ac..c21daaca7 100644
--- a/cn/docs/clients/restful-api/edge/index.html
+++ b/cn/docs/clients/restful-api/edge/index.html
@@ -20,8 +20,8 @@
Edge API
2.2 Edge
The change to the vertex id format also affects the edge Id, as well as the id format of the source and target vertices.
An EdgeId is the concatenation of src-vertex-id + direction + label + sort-values + tgt-vertex-id,
but the vertex id type here is not marked by quotes; it is marked by a prefix:
- When the id type is number, the vertex id in the EdgeId carries the prefix L, e.g. "L123456>1>>L987654"
- When the id type is string, the vertex id in the EdgeId carries the prefix S, e.g. "S1:peter>1>>S2:lop"
The following examples all assume that the schema and vertex data described earlier have already been created.
2.2.1 Create an edge
Params
- label: edge label name, required
- outV: source vertex id, required
- inV: target vertex id, required
- outVLabel: source vertex label, required
- inVLabel: target vertex label, required
- properties: properties attached to the edge; each entry consists of:
  - name: property name
  - value: property value
Method & Url
POST http://localhost:8080/graphs/hugegraph/graph/edges
Request Body
{
"label": "created",
"outV": "1:peter",
"inV": "2:lop",
@@ -46,8 +46,8 @@
"weight": 0.2
}
}
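For illustration, the same create-edge request issued from Python (the requests package is an assumption, and the property values are illustrative):

import requests

edge = {
    "label": "created",
    "outV": "1:peter",
    "inV": "2:lop",
    "outVLabel": "person",
    "inVLabel": "software",
    "properties": {"date": "2017-5-18", "weight": 0.2},  # illustrative values
}
resp = requests.post("http://localhost:8080/graphs/hugegraph/graph/edges", json=edge)
print(resp.status_code, resp.json()["id"])  # 201 and e.g. "S1:peter>1>>S2:lop"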
2.2.2 Create multiple edges
Params
- check_vertex: whether to check that the vertices exist (true | false); when set to true, an error is reported if the source or target vertex of an edge to insert does not exist.
Method & Url
POST http://localhost:8080/graphs/hugegraph/graph/edges/batch
Request Body
[
{
"label": "created",
"outV": "1:peter",
@@ -76,8 +76,8 @@
"S1:peter>1>>S2:lop",
"S1:marko>2>>S1:vadas"
]
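A hedged Python sketch of the batch call (the requests package is an assumption; the two edges mirror the ids returned above, and the second edge's label and property values are illustrative):

import requests

edges = [
    {"label": "created", "outV": "1:peter", "inV": "2:lop",
     "outVLabel": "person", "inVLabel": "software",
     "properties": {"weight": 0.2}},
    {"label": "knows", "outV": "1:marko", "inV": "1:vadas",   # label assumed
     "outVLabel": "person", "inVLabel": "person",
     "properties": {"weight": 0.5}},
]
resp = requests.post("http://localhost:8080/graphs/hugegraph/graph/edges/batch",
                     params={"check_vertex": "false"}, json=edges)
print(resp.json())  # e.g. ["S1:peter>1>>S2:lop", "S1:marko>2>>S1:vadas"]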
2.2.3 Update edge properties
Method & Url
PUT http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?action=append
Request Body
{
"properties": {
"weight": 1.0
}
@@ -126,8 +126,8 @@
}
]
}
Method & Url
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/edges/batch
Request Body
{
"edges":[
{
"id":"S1:josh>2>>S2:ripple",
@@ -192,8 +192,8 @@
}
]
}
2.2.5 Delete edge properties
Method & Url
PUT http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?action=eliminate
Request Body
{
"properties": {
"weight": 1.0
}
@@ -212,8 +212,8 @@
}
}
2.2.6 Query edges matching conditions
Params
- vertex_id: vertex id
- direction: edge direction (OUT | IN | BOTH)
- label: edge label
- properties: property key-value pairs (querying by property requires an index built in advance)
- offset: offset, defaults to 0
- limit: number of results, defaults to 100
- page: page number
The following kinds of queries are supported:
- When vertex_id is provided, the page parameter cannot be used; direction, label and properties are optional, and offset and limit can restrict the result range
- When vertex_id is not provided, label and properties are optional
- If the page parameter is used: offset must be unset or 0, direction cannot be used, and at most one property is allowed
- If the page parameter is not used: offset and limit can restrict the result range, and direction is ignored
Property key-value pairs consist of property names and values in JSON format; multiple pairs may be given as query conditions. Property values support exact matching and range matching: exact matching looks like properties={"weight":0.8}, range matching looks like properties={"age":"P.gt(0.8)"}. Range matching supports the following expressions:

Expression                          Description
P.eq(number)                        edges whose property value equals number
P.neq(number)                       edges whose property value does not equal number
P.lt(number)                        edges whose property value is less than number
P.lte(number)                       edges whose property value is less than or equal to number
P.gt(number)                        edges whose property value is greater than number
P.gte(number)                       edges whose property value is greater than or equal to number
P.between(number1,number2)          edges whose property value is >= number1 and < number2
P.inside(number1,number2)           edges whose property value is > number1 and < number2
P.outside(number1,number2)          edges whose property value is < number1 or > number2
P.within(value1,value2,value3,…)    edges whose property value equals any one of the given values

Query the edges connected to vertex person:josh (vertex_id="1:josh") with label created
Method & Url
GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?vertex_id="1:josh"&direction=BOTH&label=created&properties={}
Response Status
200
Response Body
{
"edges": [
{
@@ -244,8 +244,8 @@
}
]
}
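For illustration, the same conditional query from Python, extended with a range match (the requests package is an assumption; the weight threshold is illustrative and requires a prebuilt index):

import requests

params = {
    "vertex_id": '"1:josh"',   # string ids keep their quotes in the query
    "direction": "BOTH",
    "label": "created",
    "properties": '{"weight": "P.gt(0.1)"}',  # range match (illustrative)
}
resp = requests.get("http://127.0.0.1:8080/graphs/hugegraph/graph/edges",
                    params=params)
print(len(resp.json()["edges"]))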
Paginate over all edges: fetch the first page (page without a value), limited to 3 results
Method & Url
GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?page&limit=3
Response Status
200
Response Body
{
"edges": [{
"id": "S1:peter>2>>S2:lop",
@@ -290,8 +290,8 @@
"page": "002500100753313a6a6f73681210010004000000020953323a726970706c65f07ffffffcf07ffffffd8460d63f4b398dd2721ed4fdb7716b420004"
}
The returned body carries the page token for the next page, "page": "002500100753313a6a6f73681210010004000000020953323a726970706c65f07ffffffcf07ffffffd8460d63f4b398dd2721ed4fdb7716b420004";
assign this value to the page parameter when querying the next page.
Paginate over all edges: fetch the next page (pass the page value returned by the previous page), limited to 3 results
Method & Url
GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?page=002500100753313a6a6f73681210010004000000020953323a726970706c65f07ffffffcf07ffffffd8460d63f4b398dd2721ed4fdb7716b420004&limit=3
Response Status
200
Response Body
{
"edges": [{
"id": "S1:marko>1>20130220>S1:josh",
@@ -335,8 +335,8 @@
],
"page": null
}
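The paging protocol above lends itself to a simple loop; a minimal Python sketch (the requests package is an assumption, and passing page with an empty value is assumed to behave like the bare page parameter shown above):

import requests

URL = "http://127.0.0.1:8080/graphs/hugegraph/graph/edges"

def iter_edges(limit=3):
    """Yield all edges, following the page token until the server returns null."""
    page = ""  # empty value requests the first page (assumption)
    while page is not None:
        body = requests.get(URL, params={"page": page, "limit": limit}).json()
        yield from body["edges"]
        page = body["page"]  # JSON null -> None: no more pages

# With a Cassandra backend the last page may carry a non-null token; the
# following request then yields no edges and page = null, ending the loop.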
Here "page": null indicates that there are no more pages. (Note: when the backend is Cassandra, for performance reasons the returned page value may be non-null even when the returned page happens to be the last one; requesting the next page with that page value then returns empty data and page = null. Other cases behave similarly.)
2.2.7 Get an edge by Id
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop
Response Status
200
Response Body
{
"id": "S1:peter>1>>S2:lop",
"label": "created",
@@ -350,10 +350,10 @@
"weight": 0.2
}
}
2.2.8 Delete an edge by Id
Params
- label: edge label, optional
Delete an edge by Id only
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop
Response Status
204
Delete an edge by Label + Id
Deleting an edge with both the Label parameter and the Id generally performs better than deleting by Id alone.
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?label=person
Response Status
204
Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
diff --git a/cn/docs/clients/restful-api/edgelabel/index.html b/cn/docs/clients/restful-api/edgelabel/index.html
index 906a50538..c3390c32f 100644
--- a/cn/docs/clients/restful-api/edgelabel/index.html
+++ b/cn/docs/clients/restful-api/edgelabel/index.html
@@ -16,8 +16,8 @@
EdgeLabel API
1.4 EdgeLabel
Assume the PropertyKeys from 1.2.3 and the VertexLabels from 1.3.3 have already been created.
Params
- name: edge label name, required
- source_label: name of the source vertex label, required
- target_label: name of the target vertex label, required
- frequency: whether multiple edges are allowed between two vertices; may be SINGLE or MULTIPLE, optional, defaults to SINGLE
- properties: property types associated with the edge label, optional
- sort_keys: when multiple edges are allowed, the list of sort-key properties used to distinguish them
- nullable_keys: properties allowed to be null, optional, nullable by default
- enable_label_index: whether to enable the label index, disabled by default
1.4.1 Create an EdgeLabel
Method & Url
POST http://localhost:8080/graphs/hugegraph/schema/edgelabels
Request Body
{
"name": "created",
"source_label": "person",
"target_label": "software",
@@ -89,8 +89,8 @@
"ttl_start_time": "createdTime",
"user_data": {}
}
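For illustration, the create call from Python using the field names in the Params list above (the requests package is an assumption, and the property values are illustrative):

import requests

edge_label = {
    "name": "created",
    "source_label": "person",
    "target_label": "software",
    "frequency": "SINGLE",
    "properties": ["weight"],
    "nullable_keys": [],
    "enable_label_index": True,
}
resp = requests.post("http://localhost:8080/graphs/hugegraph/schema/edgelabels",
                     json=edge_label)
print(resp.status_code)  # expect 201 on success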
1.4.2 Add properties or userdata to an existing EdgeLabel, or remove userdata (removing properties is not currently supported)
Params
- action: whether the current operation adds or removes; valid values are append (add) and eliminate (remove)
Method & Url
PUT http://localhost:8080/graphs/hugegraph/schema/edgelabels/created?action=append
Request Body
{
"name": "created",
"properties": [
"weight"
@@ -120,8 +120,8 @@
"enable_label_index": true,
"user_data": {}
}
1.4.3 Get all EdgeLabels
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/edgelabels
Response Status
200
Response Body
{
"edgelabels": [
{
@@ -165,8 +165,8 @@
}
]
}
1.4.4 Get an EdgeLabel by name
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/edgelabels/created
Response Status
200
Response Body
{
"id": 1,
"sort_keys": [
@@ -189,8 +189,8 @@
"enable_label_index": true,
"user_data": {}
}
1.4.5 Delete an EdgeLabel by name
Deleting an EdgeLabel also removes the corresponding edges and related index data, and creates an asynchronous task.
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/schema/edgelabels/created
Response Status
202
Response Body
{
"task_id": 1
}
diff --git a/cn/docs/clients/restful-api/graphs/index.html b/cn/docs/clients/restful-api/graphs/index.html
index fb1d63052..0c84960da 100644
--- a/cn/docs/clients/restful-api/graphs/index.html
+++ b/cn/docs/clients/restful-api/graphs/index.html
@@ -15,102 +15,102 @@
Graphs API
6.1 Graphs
6.1.1 List all graphs in the database
Method & Url
GET http://localhost:8080/graphs
Response Status
200
Response Body
{
"graphs": [
"hugegraph",
"hugegraph1"
]
}
6.1.2 View information about a graph
Method & Url
GET http://localhost:8080/graphs/hugegraph
Response Status
200
Response Body
{
"name": "hugegraph",
"backend": "cassandra"
}
6.1.3 Clear all data of a graph, including schema, vertices, edges and indexes; this operation requires admin permission
Params
Since clearing a graph is a dangerous operation, a confirmation parameter was added to the API to prevent accidental calls:
- confirm_message: defaults to
I'm sure to delete all data
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/clear?confirm_message=I%27m+sure+to+delete+all+data
Response Status
204
6.1.4 Clone a graph; this operation requires admin permission
Params
- clone_graph_name: name of an existing graph to clone from; the user may optionally pass a configuration file, which then replaces the configuration of the existing graph
Method & Url
POST http://localhost:8080/graphs/hugegraph_clone?clone_graph_name=hugegraph
Request Body [optional]
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
backend=rocksdb
serializer=binary
store=hugegraph_clone
rocksdb.data_path=./hg2
rocksdb.wal_path=./hg2
Response Status
200
Response Body
{
"name": "hugegraph_clone",
"backend": "rocksdb"
}
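A hedged Python sketch of the clone call (the requests package, the text/plain content type and the admin credentials are assumptions):

import requests

conf = """gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
backend=rocksdb
serializer=binary
store=hugegraph_clone
rocksdb.data_path=./hg2
rocksdb.wal_path=./hg2
"""
resp = requests.post("http://localhost:8080/graphs/hugegraph_clone",
                     params={"clone_graph_name": "hugegraph"},
                     data=conf,  # optional body: replaces the cloned config
                     headers={"Content-Type": "text/plain"},  # assumption
                     auth=("admin", "pa$$word"))  # placeholder credentials
print(resp.json())  # {"name": "hugegraph_clone", "backend": "rocksdb"}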
6.1.5 Create a graph; this operation requires admin permission
Method & Url
POST http://localhost:8080/graphs/hugegraph2
Request Body
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
backend=rocksdb
serializer=binary
store=hugegraph2
rocksdb.data_path=./hg2
rocksdb.wal_path=./hg2
Response Status
200
Response Body
{
"name": "hugegraph2",
"backend": "rocksdb"
}
6.1.6 Delete a graph and all of its data
Params
Since deleting a graph is a dangerous operation, a confirmation parameter was added to the API to prevent accidental calls:
- confirm_message: defaults to
I'm sure to drop the graph
Method & Url
DELETE http://localhost:8080/graphs/hugegraph_clone?confirm_message=I%27m%20sure%20to%20drop%20the%20graph
Response Status
204
6.2 Conf
6.2.1 View the configuration of a graph; this operation requires admin permission
Method & Url
GET http://localhost:8080/graphs/hugegraph/conf
Response Status
200
Response Body
# gremlin entrance to create graph
gremlin.graph=com.baidu.hugegraph.HugeFactory

# cache config
#schema.cache_capacity=1048576
#graph.cache_capacity=10485760
#graph.cache_expire=600

# schema illegal name template
#schema.illegal_name_regex=\s+|~.*

#vertex.default_label=vertex

backend=cassandra
serializer=cassandra

store=hugegraph
...
6.3 Mode
Valid graph modes are: NONE, RESTORING, MERGING, LOADING
- NONE mode (default): writes of metadata and graph data behave normally. In particular:
  - IDs cannot be specified when creating metadata (schema)
  - IDs cannot be specified for graph data (vertex) when the id strategy is Automatic
- LOADING: enabled automatically during bulk data import. In particular:
  - Required properties are not checked when adding vertices/edges
There are two different modes during Restore: Restoring and Merging
- Restoring mode, for restoring into a new graph. In particular:
  - IDs may be specified when creating metadata (schema)
  - IDs may be specified for graph data (vertex) when the id strategy is Automatic
- Merging mode, for merging into a graph that already contains metadata and graph data. In particular:
  - IDs cannot be specified when creating metadata (schema)
  - IDs may be specified for graph data (vertex) when the id strategy is Automatic
Normally the graph mode is NONE. When a graph needs to be restored, temporarily switch the mode to Restoring or Merging as required, and switch it back to NONE once the restore is complete.
6.3.1 View the mode of a graph
Method & Url
GET http://localhost:8080/graphs/hugegraph/mode
Response Status
200
Response Body
{
"mode": "NONE"
}
Valid graph modes are: NONE, RESTORING, MERGING
6.3.2 Set the mode of a graph; this operation requires admin permission
Method & Url
PUT http://localhost:8080/graphs/hugegraph/mode
Request Body
"RESTORING"
Valid graph modes are: NONE, RESTORING, MERGING
Response Status
200
Response Body
{
"mode": "RESTORING"
}
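The restore workflow described in 6.3 can be wrapped as below (a sketch only; the requests package and the admin credentials are assumptions):

import requests

MODE_URL = "http://localhost:8080/graphs/hugegraph/mode"
AUTH = ("admin", "pa$$word")  # placeholder admin credentials

def run_restore(restore_fn, mode="RESTORING"):
    """Temporarily switch the graph mode, then always switch back to NONE."""
    requests.put(MODE_URL, json=mode, auth=AUTH).raise_for_status()
    try:
        restore_fn()  # perform the actual restore here
    finally:
        requests.put(MODE_URL, json="NONE", auth=AUTH).raise_for_status()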
6.3.3 View the read mode of a graph
Params
- name: name of the graph
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph_read_mode
Response Status
200
Response Body
{
"graph_read_mode": "ALL"
}
6.3.4 Set the read mode of a graph; this operation requires admin permission
Params
- name: name of the graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/graph_read_mode
Request Body
"OLTP_ONLY"
Valid graph read modes are: ALL, OLTP_ONLY, OLAP_ONLY
Response Status
200
Response Body
{
"graph_read_mode": "OLTP_ONLY"
}
6.4 Snapshot
6.4.1 Create a snapshot
Params
- name: name of the graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/snapshot_create
Response Status
200
Response Body
{
"hugegraph": "snapshot_created"
}
6.4.2 Resume from a snapshot
Params
- name: name of the graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/snapshot_resume
Response Status
200
Response Body
{
"hugegraph": "snapshot_resumed"
}
6.5 Compact
6.5.1 Manually compact a graph; this operation requires admin permission
Params
- name: name of the graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/compact
Response Status
200
Response Body
{
"nodes": 1,
"cluster_id": "local",
diff --git a/cn/docs/clients/restful-api/gremlin/index.html b/cn/docs/clients/restful-api/gremlin/index.html
index 965269134..201446d68 100644
--- a/cn/docs/clients/restful-api/gremlin/index.html
+++ b/cn/docs/clients/restful-api/gremlin/index.html
@@ -12,8 +12,8 @@
Gremlin API
8.1 Gremlin
8.1.1 Send a gremlin statement to HugeGraphServer (GET), executed synchronously
Params
- gremlin: the gremlin statement to send to HugeGraphServer for execution
- bindings: used to bind parameters; keys are strings, values are the bound values (only strings or numbers); works like a MySQL Prepared Statement and speeds up statement execution
- language: the language of the statement being sent, defaults to gremlin-groovy
- aliases: adds aliases for existing variables in the graph space
Query a vertex
Method & Url
GET http://127.0.0.1:8080/gremlin?gremlin=hugegraph.traversal().V('1:marko')
Response Status
200
Response Body
{
"requestId": "c6ef47a8-b634-4b07-9d38-6b3b69a3a556",
"status": {
@@ -44,8 +44,8 @@
"meta": {}
}
}
8.1.2 Send a gremlin statement to HugeGraphServer (POST), executed synchronously
Method & Url
POST http://localhost:8080/gremlin
Query a vertex
Request Body
{
"gremlin": "hugegraph.traversal().V('1:marko')",
"bindings": {},
"language": "gremlin-groovy",
@@ -119,8 +119,8 @@
"meta": {}
}
}
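For illustration, the synchronous POST issued from Python (the requests package is an assumption; the body mirrors the example above, with the optional aliases field left out):

import requests

stmt = {
    "gremlin": "hugegraph.traversal().V('1:marko')",
    "bindings": {},
    "language": "gremlin-groovy",
}
resp = requests.post("http://localhost:8080/gremlin", json=stmt)
print(resp.json()["status"]["code"])  # 200 on success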
8.1.3 Send a gremlin statement to HugeGraphServer (POST), executed asynchronously
Method & Url
POST http://localhost:8080/graphs/hugegraph/jobs/gremlin
Query a vertex
Request Body
{
"gremlin": "g.V('1:marko')",
"bindings": {},
"language": "gremlin-groovy",
diff --git a/cn/docs/clients/restful-api/index.xml b/cn/docs/clients/restful-api/index.xml
index 6e00f8204..e7414ec15 100644
--- a/cn/docs/clients/restful-api/index.xml
+++ b/cn/docs/clients/restful-api/index.xml
@@ -2,8 +2,10 @@
<h3 id="11-schema">1.1 Schema</h3>
<p>HugeGraph 提供单一接口获取某个图的全部 Schema 信息,包括:PropertyKey、VertexLabel、EdgeLabel 和 IndexLabel。</p>
<h5 id="method--url">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/schema
-</code></pre><h5 id="response-status">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/{graph_name}/schema
+</span></span><span style="display:flex;"><span>
+</span></span><span style="display:flex;"><span>e.g: GET http://localhost:8080/graphs/hugegraph/schema
+</span></span></code></pre></div><h5 id="response-status">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -11,15 +13,14 @@
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"id"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">7</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"price"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"data_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"INT"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"data_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"DOUBLE"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"cardinality"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"SINGLE"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"aggregate_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"NONE"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"write_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"OLTP"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
-</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[],</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"status"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"CREATED"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2021-09-03 15:13:40.741"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2023-05-08 17:49:05.316"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">},</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
@@ -29,11 +30,10 @@
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"cardinality"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"SINGLE"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"aggregate_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"NONE"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"write_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"OLTP"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
-</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[],</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"status"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"CREATED"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2021-09-03 15:13:40.729"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2023-05-08 17:49:05.309"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">},</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
@@ -43,11 +43,10 @@
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"cardinality"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"SINGLE"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"aggregate_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"NONE"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"write_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"OLTP"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
-</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[],</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"status"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"CREATED"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2021-09-03 15:13:40.691"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2023-05-08 17:49:05.287"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">},</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
@@ -57,11 +56,10 @@
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"cardinality"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"SINGLE"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"aggregate_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"NONE"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"write_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"OLTP"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
-</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[],</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"status"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"CREATED"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2021-09-03 15:13:40.678"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2023-05-08 17:49:05.280"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">},</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
@@ -71,11 +69,10 @@
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"cardinality"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"SINGLE"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"aggregate_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"NONE"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"write_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"OLTP"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
-</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[],</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"status"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"CREATED"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2021-09-03 15:13:40.718"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2023-05-08 17:49:05.301"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">},</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
@@ -85,11 +82,10 @@
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"cardinality"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"SINGLE"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"aggregate_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"NONE"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"write_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"OLTP"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
-</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[],</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"status"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"CREATED"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2021-09-03 15:13:40.707"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2023-05-08 17:49:05.294"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">},</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
@@ -99,11 +95,10 @@
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"cardinality"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"SINGLE"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"aggregate_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"NONE"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"write_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"OLTP"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
-</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[],</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"status"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"CREATED"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2021-09-03 15:13:40.609"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2023-05-08 17:49:05.250"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
@@ -116,9 +111,11 @@
</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"name"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"nullable_keys"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
-</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"age"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"age"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"city"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"index_labels"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
+</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"personByAge"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"personByCity"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"personByAgeAndCity"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
@@ -131,19 +128,15 @@
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"ttl"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">0</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"enable_label_index"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#204a87;font-weight:bold">true</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2021-09-03 15:13:40.783"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2023-05-08 17:49:05.336"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">},</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"id"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">2</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"software"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"id_strategy"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"PRIMARY_KEY"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"primary_keys"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
-</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"name"</span>
-</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"nullable_keys"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
-</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"price"</span>
-</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"id_strategy"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"CUSTOMIZE_NUMBER"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"primary_keys"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[],</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"nullable_keys"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[],</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"index_labels"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"softwareByPrice"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
@@ -156,7 +149,7 @@
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"ttl"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">0</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"enable_label_index"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#204a87;font-weight:bold">true</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2021-09-03 15:13:40.840"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2023-05-08 17:49:05.347"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
@@ -166,13 +159,9 @@
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"knows"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"source_label"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"person"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"target_label"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"person"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"frequency"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"MULTIPLE"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"sort_keys"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
-</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"date"</span>
-</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"nullable_keys"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
-</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"weight"</span>
-</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"frequency"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"SINGLE"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"sort_keys"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[],</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"nullable_keys"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[],</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"index_labels"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"knowsByWeight"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
@@ -184,7 +173,7 @@
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"ttl"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">0</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"enable_label_index"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#204a87;font-weight:bold">true</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2021-09-03 15:13:41.840"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2023-05-08 17:49:08.437"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">},</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
@@ -193,11 +182,8 @@
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"source_label"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"person"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"target_label"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"software"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"frequency"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"SINGLE"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"sort_keys"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
-</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"nullable_keys"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
-</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"weight"</span>
-</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"sort_keys"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[],</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"nullable_keys"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[],</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"index_labels"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"createdByDate"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"createdByWeight"</span>
@@ -210,13 +196,27 @@
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"ttl"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">0</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"enable_label_index"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#204a87;font-weight:bold">true</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2021-09-03 15:13:41.868"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2023-05-08 17:49:08.446"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"indexlabels"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"id"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">1</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"personByAge"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"VERTEX_LABEL"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_value"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"person"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"index_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"RANGE_INT"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"fields"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
+</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"age"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"status"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"CREATED"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2023-05-08 17:49:05.375"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
+</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">},</span>
+</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"id"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">2</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"personByCity"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"VERTEX_LABEL"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_value"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"person"</span><span style="color:#000;font-weight:bold">,</span>
@@ -226,68 +226,68 @@
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"status"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"CREATED"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2021-09-03 15:13:40.886"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2023-05-08 17:49:06.898"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">},</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"id"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">4</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"createdByDate"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"EDGE_LABEL"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_value"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"created"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"id"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">3</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"personByAgeAndCity"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"VERTEX_LABEL"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_value"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"person"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"index_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"SECONDARY"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"fields"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
-</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"date"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"age"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"city"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"status"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"CREATED"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2021-09-03 15:13:41.878"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2023-05-08 17:49:07.407"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">},</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"id"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">5</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"createdByWeight"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"EDGE_LABEL"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_value"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"created"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"id"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">4</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"softwareByPrice"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"VERTEX_LABEL"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_value"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"software"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"index_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"RANGE_DOUBLE"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"fields"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
-</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"weight"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"price"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"status"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"CREATED"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2021-09-03 15:13:42.117"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2023-05-08 17:49:07.916"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">},</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"id"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">2</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"personByAgeAndCity"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"VERTEX_LABEL"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_value"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"person"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"id"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">5</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"createdByDate"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"EDGE_LABEL"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_value"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"created"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"index_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"SECONDARY"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"fields"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
-</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"age"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"city"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"date"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"status"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"CREATED"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2021-09-03 15:13:41.351"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2023-05-08 17:49:08.454"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">},</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"id"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">3</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"softwareByPrice"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"VERTEX_LABEL"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_value"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"software"</span><span style="color:#000;font-weight:bold">,</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"index_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"RANGE_INT"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"id"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">6</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"createdByWeight"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"EDGE_LABEL"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_value"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"created"</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"index_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"RANGE_DOUBLE"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"fields"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
-</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"price"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"weight"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"status"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"CREATED"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2021-09-03 15:13:41.587"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2023-05-08 17:49:08.963"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">},</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"id"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">6</span><span style="color:#000;font-weight:bold">,</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"id"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">7</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"knowsByWeight"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"EDGE_LABEL"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_value"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"knows"</span><span style="color:#000;font-weight:bold">,</span>
@@ -297,7 +297,7 @@
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">],</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"status"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"CREATED"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
-</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2021-09-03 15:13:42.376"</span>
+</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2023-05-08 17:49:09.473"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">]</span>
@@ -318,8 +318,8 @@
</ul>
<h4 id="121-创建一个-propertykey">1.2.1 创建一个 PropertyKey</h4>
<h5 id="method--url">Method & Url</h5>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/hugegraph/schema/propertykeys
-</code></pre><h5 id="request-body">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/hugegraph/schema/propertykeys
+</span></span></code></pre></div><h5 id="request-body">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"age"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"data_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"INT"</span><span style="color:#000;font-weight:bold">,</span>
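<p>For reference, a minimal Python sketch of this call, assuming the <code>localhost:8080</code> server used throughout these examples and the third-party <code>requests</code> library; the <code>cardinality</code> field is an assumption, since the request body is only partially visible in this excerpt:</p>
<pre tabindex="0"><code>import requests

# Create the "age" PropertyKey shown in the request body above.
body = {
    "name": "age",
    "data_type": "INT",
    "cardinality": "SINGLE",  # assumption: not visible in the excerpt above
}
resp = requests.post(
    "http://localhost:8080/graphs/hugegraph/schema/propertykeys",
    json=body,
)
print(resp.status_code, resp.json())
</code></pre>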
@@ -350,8 +350,8 @@
<li>action: indicates whether this operation adds or removes data; valid values are <code>append</code> (add) and <code>eliminate</code> (remove)</li>
</ul>
<h5 id="method--url-1">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/schema/propertykeys/age?action=append
-</code></pre><h5 id="request-body-1">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/schema/propertykeys/age?action=append
+</span></span></code></pre></div><h5 id="request-body-1">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"age"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
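<p>A matching sketch for the append action; the <code>user_data</code> values are illustrative, since the request body above is truncated here:</p>
<pre tabindex="0"><code>import requests

# Append extra user_data to the existing "age" PropertyKey.
# Passing action=eliminate instead would remove the given keys.
resp = requests.put(
    "http://localhost:8080/graphs/hugegraph/schema/propertykeys/age",
    params={"action": "append"},
    json={"name": "age", "user_data": {"min": 0, "max": 100}},  # illustrative values
)
print(resp.json())
</code></pre>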
@@ -382,8 +382,8 @@
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="123-获取所有的-propertykey">1.2.3 获取所有的 PropertyKey</h4>
<h5 id="method--url-2">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/schema/propertykeys
-</code></pre><h5 id="response-status-2">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/schema/propertykeys
+</span></span></code></pre></div><h5 id="response-status-2">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-2">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -448,8 +448,8 @@
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="124-根据name获取propertykey">1.2.4 根据name获取PropertyKey</h4>
<h5 id="method--url-3">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/schema/propertykeys/age
-</code></pre><p>Here, <code>age</code> is the name of the PropertyKey to retrieve</p>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/schema/propertykeys/age
+</span></span></code></pre></div><p>Here, <code>age</code> is the name of the PropertyKey to retrieve</p>
<h5 id="response-status-3">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-3">Response Body</h5>
@@ -468,10 +468,10 @@
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"~create_time"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"2022-05-13 13:47:23.745"</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
-</span></span></code></pre></div><h4 id="125-根据name删除propertykey">1.2.5 Delete a PropertyKey by name</h4>
+</span></span></code></pre></div><h4 id="125-根据-name-删除-propertykey">1.2.5 Delete a PropertyKey by name</h4>
<h5 id="method--url-4">Method & Url</h5>
-<pre tabindex="0"><code>DELETE http://localhost:8080/graphs/hugegraph/schema/propertykeys/age
-</code></pre><p>Here, <code>age</code> is the name of the PropertyKey to retrieve</p>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>DELETE http://localhost:8080/graphs/hugegraph/schema/propertykeys/age
+</span></span></code></pre></div><p>Here, <code>age</code> is the name of the PropertyKey to delete</p>
<h5 id="response-status-4">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">202</span>
</span></span></code></pre></div><h5 id="response-body-4">Response Body</h5>
@@ -495,8 +495,8 @@
</ul>
<h4 id="131-创建一个vertexlabel">1.3.1 创建一个VertexLabel</h4>
<h5 id="method--url">Method & Url</h5>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/hugegraph/schema/vertexlabels
-</code></pre><h5 id="request-body">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/hugegraph/schema/vertexlabels
+</span></span></code></pre></div><h5 id="request-body">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"person"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"id_strategy"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"DEFAULT"</span><span style="color:#000;font-weight:bold">,</span>
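<p>A sketch of creating the <code>person</code> VertexLabel; the request body above is truncated here, so the property and primary-key lists below are assumptions consistent with the schema shown earlier on this page:</p>
<pre tabindex="0"><code>import requests

body = {
    "name": "person",
    "id_strategy": "DEFAULT",
    "properties": ["name", "age"],  # assumption: matches the schema diff above
    "primary_keys": ["name"],       # assumption: name is the primary key
}
resp = requests.post(
    "http://localhost:8080/graphs/hugegraph/schema/vertexlabels",
    json=body,
)
print(resp.json())
</code></pre>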
@@ -569,8 +569,8 @@
<li>action: indicates whether this operation adds or removes data; valid values are <code>append</code> (add) and <code>eliminate</code> (remove)</li>
</ul>
<h5 id="method--url-1">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person?action=append
-</code></pre><h5 id="request-body-1">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person?action=append
+</span></span></code></pre></div><h5 id="request-body-1">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"person"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
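<p>A sketch of appending a property; <code>city</code> is an assumed example, consistent with the <code>nullable_keys</code> change in the schema diff above (a property appended to a label that already has data generally has to be nullable):</p>
<pre tabindex="0"><code>import requests

resp = requests.put(
    "http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person",
    params={"action": "append"},
    json={
        "name": "person",
        "properties": ["city"],     # assumed example property
        "nullable_keys": ["city"],  # appended properties must be nullable
    },
)
print(resp.json())
</code></pre>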
@@ -608,8 +608,8 @@
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="133-获取所有的vertexlabel">1.3.3 获取所有的VertexLabel</h4>
<h5 id="method--url-2">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/schema/vertexlabels
-</code></pre><h5 id="response-status-2">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/schema/vertexlabels
+</span></span></code></pre></div><h5 id="response-status-2">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-2">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -660,8 +660,8 @@
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="134-根据name获取vertexlabel">1.3.4 根据name获取VertexLabel</h4>
<h5 id="method--url-3">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person
-</code></pre><h5 id="response-status-3">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person
+</span></span></code></pre></div><h5 id="response-status-3">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-3">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -689,8 +689,8 @@
</span></span></code></pre></div><h4 id="135-根据name删除vertexlabel">1.3.5 根据name删除VertexLabel</h4>
<p>删除 VertexLabel 会导致删除对应的顶点以及相关的索引数据,会产生一个异步任务</p>
<h5 id="method--url-4">Method & Url</h5>
-<pre tabindex="0"><code>DELETE http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person
-</code></pre><h5 id="response-status-4">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>DELETE http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person
+</span></span></code></pre></div><h5 id="response-status-4">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">202</span>
</span></span></code></pre></div><h5 id="response-body-4">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -715,8 +715,8 @@
</ul>
<h4 id="141-创建一个edgelabel">1.4.1 创建一个EdgeLabel</h4>
<h5 id="method--url">Method & Url</h5>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/hugegraph/schema/edgelabels
-</code></pre><h5 id="request-body">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/hugegraph/schema/edgelabels
+</span></span></code></pre></div><h5 id="request-body">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"created"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"source_label"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"person"</span><span style="color:#000;font-weight:bold">,</span>
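<p>A sketch of creating the <code>created</code> EdgeLabel; the body above is truncated, so <code>target_label</code>, <code>frequency</code>, and the property list are assumptions consistent with the schema diff earlier on this page:</p>
<pre tabindex="0"><code>import requests

body = {
    "name": "created",
    "source_label": "person",
    "target_label": "software",        # assumption: matches the schema diff above
    "frequency": "SINGLE",             # assumption: as in the diff above
    "properties": ["date", "weight"],  # assumption: the indexed fields above
}
resp = requests.post(
    "http://localhost:8080/graphs/hugegraph/schema/edgelabels",
    json=body,
)
print(resp.json())
</code></pre>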
@@ -799,8 +799,8 @@
<li>action: indicates whether this operation adds or removes data; valid values are <code>append</code> (add) and <code>eliminate</code> (remove)</li>
</ul>
<h5 id="method--url-1">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/schema/edgelabels/created?action=append
-</code></pre><h5 id="request-body-1">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/schema/edgelabels/created?action=append
+</span></span></code></pre></div><h5 id="request-body-1">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"created"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span>
@@ -835,8 +835,8 @@
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="143-获取所有的edgelabel">1.4.3 获取所有的EdgeLabel</h4>
<h5 id="method--url-2">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/schema/edgelabels
-</code></pre><h5 id="response-status-2">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/schema/edgelabels
+</span></span></code></pre></div><h5 id="response-status-2">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-2">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -884,8 +884,8 @@
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="144-根据name获取edgelabel">1.4.4 根据name获取EdgeLabel</h4>
<h5 id="method--url-3">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/schema/edgelabels/created
-</code></pre><h5 id="response-status-3">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/schema/edgelabels/created
+</span></span></code></pre></div><h5 id="response-status-3">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-3">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -913,8 +913,8 @@
</span></span></code></pre></div><h4 id="145-根据name删除edgelabel">1.4.5 根据name删除EdgeLabel</h4>
<p>Deleting an EdgeLabel also deletes the corresponding edges and the related index data, and creates an asynchronous task</p>
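<p>Because only a task is scheduled, a client typically polls that task until it completes. Below is a minimal Python sketch using the requests library; the task endpoint /graphs/hugegraph/tasks/{id} and the task_id / task_status field names are assumptions based on the asynchronous-task pattern and are not shown in full in this section:</p>
<pre tabindex="0"><code>import time
import requests

BASE = "http://localhost:8080/graphs/hugegraph"

# Schedule the deletion; the server is expected to answer 202 plus a task id.
resp = requests.delete(BASE + "/schema/edgelabels/created")
assert resp.status_code == 202
task_id = resp.json()["task_id"]

# Poll the (assumed) task endpoint until the asynchronous job settles.
while True:
    task = requests.get(BASE + "/tasks/" + str(task_id)).json()
    if task.get("task_status") in ("success", "failed"):
        break
    time.sleep(1)
</code></pre>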
<h5 id="method--url-4">Method & Url</h5>
-<pre tabindex="0"><code>DELETE http://localhost:8080/graphs/hugegraph/schema/edgelabels/created
-</code></pre><h5 id="response-status-4">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>DELETE http://localhost:8080/graphs/hugegraph/schema/edgelabels/created
+</span></span></code></pre></div><h5 id="response-status-4">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">202</span>
</span></span></code></pre></div><h5 id="response-body-4">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -928,8 +928,8 @@
<p>Assume that the PropertyKeys from 1.1.3, the VertexLabels from 1.2.3, and the EdgeLabels from 1.3.3 have already been created</p>
<h4 id="151-创建一个indexlabel">1.5.1 创建一个IndexLabel</h4>
<h5 id="method--url">Method & Url</h5>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/hugegraph/schema/indexlabels
-</code></pre><h5 id="request-body">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/hugegraph/schema/indexlabels
+</span></span></code></pre></div><h5 id="request-body">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"personByCity"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"base_type"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"VERTEX_LABEL"</span><span style="color:#000;font-weight:bold">,</span>
@@ -957,8 +957,8 @@
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="152-获取所有的indexlabel">1.5.2 获取所有的IndexLabel</h4>
<h5 id="method--url-1">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/schema/indexlabels
-</code></pre><h5 id="response-status-1">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/schema/indexlabels
+</span></span></code></pre></div><h5 id="response-status-1">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-1">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -1008,8 +1008,8 @@
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="153-根据name获取indexlabel">1.5.3 根据name获取IndexLabel</h4>
<h5 id="method--url-2">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/schema/indexlabels/personByCity
-</code></pre><h5 id="response-status-2">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/schema/indexlabels/personByCity
+</span></span></code></pre></div><h5 id="response-status-2">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-2">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -1025,8 +1025,8 @@
</span></span></code></pre></div><h4 id="154-根据name删除indexlabel">1.5.4 根据name删除IndexLabel</h4>
<p>Deleting an IndexLabel also deletes the related index data, and creates an asynchronous task</p>
<h5 id="method--url-3">Method & Url</h5>
-<pre tabindex="0"><code>DELETE http://localhost:8080/graphs/hugegraph/schema/indexlabels/personByCity
-</code></pre><h5 id="response-status-3">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>DELETE http://localhost:8080/graphs/hugegraph/schema/indexlabels/personByCity
+</span></span></code></pre></div><h5 id="response-status-3">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">202</span>
</span></span></code></pre></div><h5 id="response-body-3">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -1039,8 +1039,8 @@
<h3 id="16-rebuild">1.6 Rebuild</h3>
<h4 id="161-重建indexlabel">1.6.1 重建IndexLabel</h4>
<h5 id="method--url">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/indexlabels/personByCity
-</code></pre><h5 id="response-status">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/indexlabels/personByCity
+</span></span></code></pre></div><h5 id="response-status">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">202</span>
</span></span></code></pre></div><h5 id="response-body">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -1052,8 +1052,8 @@
</blockquote>
<h4 id="162-vertexlabel对应的全部索引重建">1.6.2 VertexLabel对应的全部索引重建</h4>
<h5 id="method--url-1">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/vertexlabels/person
-</code></pre><h5 id="response-status-1">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/vertexlabels/person
+</span></span></code></pre></div><h5 id="response-status-1">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">202</span>
</span></span></code></pre></div><h5 id="response-body-1">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -1065,8 +1065,8 @@
</blockquote>
<h4 id="163-edgelabel对应的全部索引重建">1.6.3 EdgeLabel对应的全部索引重建</h4>
<h5 id="method--url-2">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/edgelabels/created
-</code></pre><h5 id="response-status-2">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/edgelabels/created
+</span></span></code></pre></div><h5 id="response-status-2">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">202</span>
</span></span></code></pre></div><h5 id="response-body-2">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -1117,8 +1117,8 @@
<p>The following examples all assume that the schema described above has already been created</p>
<h4 id="211-创建一个顶点">2.1.1 创建一个顶点</h4>
<h5 id="method--url">Method & Url</h5>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/hugegraph/graph/vertices
-</code></pre><h5 id="request-body">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/hugegraph/graph/vertices
+</span></span></code></pre></div><h5 id="request-body">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"label"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"person"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
@@ -1150,8 +1150,8 @@
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="212-创建多个顶点">2.1.2 创建多个顶点</h4>
<h5 id="method--url-1">Method & Url</h5>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/hugegraph/graph/vertices/batch
-</code></pre><h5 id="request-body-1">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/hugegraph/graph/vertices/batch
+</span></span></code></pre></div><h5 id="request-body-1">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">[</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"label"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"person"</span><span style="color:#000;font-weight:bold">,</span>
@@ -1178,8 +1178,8 @@
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">]</span>
</span></span></code></pre></div><h4 id="213-更新顶点属性">2.1.3 更新顶点属性</h4>
<h5 id="method--url-2">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/"1:marko"?action=append
-</code></pre><h5 id="request-body-2">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/"1:marko"?action=append
+</span></span></code></pre></div><h5 id="request-body-2">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"label"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"person"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
@@ -1262,8 +1262,8 @@
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">]</span>
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h5 id="method--url-3">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/batch
-</code></pre><h5 id="request-body-3">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/batch
+</span></span></code></pre></div><h5 id="request-body-3">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"vertices"</span><span style="color:#000;font-weight:bold">:[</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
@@ -1342,8 +1342,8 @@
<p>The other update strategies can be used analogously and are not spelled out one by one; one more is sketched below.</p>
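<p>For example, a strategy such as BIGGER keeps the larger of the existing and incoming values. The following Python sketch (requests library) shows the shape of such a call; the update_strategies and create_if_not_exist fields mirror the batch-update body above but should be verified against your server version:</p>
<pre tabindex="0"><code>import requests

url = "http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/batch"
payload = {
    "vertices": [
        {"label": "person", "properties": {"name": "marko", "age": 30}}
    ],
    # Assumed strategy map: keep whichever existing/incoming "age" is larger.
    "update_strategies": {"age": "BIGGER"},
    "create_if_not_exist": True,
}
resp = requests.put(url, json=payload)
print(resp.status_code, resp.json())
</code></pre>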
<h4 id="215-删除顶点属性">2.1.5 删除顶点属性</h4>
<h5 id="method--url-4">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/"1:marko"?action=eliminate
-</code></pre><h5 id="request-body-4">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/"1:marko"?action=eliminate
+</span></span></code></pre></div><h5 id="request-body-4">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"label"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"person"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
@@ -1437,8 +1437,8 @@
</table>
<p><strong>Query all vertices with age 29 and label person</strong></p>
<h5 id="method--url-5">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/graph/vertices?label=person&properties={"age":29}&limit=1
-</code></pre><h5 id="response-status-5">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/graph/vertices?label=person&properties={"age":29}&limit=1
+</span></span></code></pre></div><h5 id="response-status-5">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-5">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -1472,8 +1472,8 @@
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><p><strong>Paged query of all vertices, fetching the first page (page without a parameter value), limited to 3 records</strong></p>
<h5 id="method--url-6">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/graph/vertices?page&limit=3
-</code></pre><h5 id="response-status-6">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/graph/vertices?page&limit=3
+</span></span></code></pre></div><h5 id="response-status-6">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-6">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -1541,8 +1541,8 @@
Assign this value to the page parameter when querying the next page.</p>
<p><strong>Paged query of all vertices, fetching the next page (page set to the value returned by the previous page), limited to 3 records</strong></p>
<h5 id="method--url-7">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/graph/vertices?page=001000100853313a706574657200f07ffffffc00e797c6349be736fffc8699e8a502efe10004&limit=3
-</code></pre><h5 id="response-status-7">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/graph/vertices?page=001000100853313a706574657200f07ffffffc00e797c6349be736fffc8699e8a502efe10004&limit=3
+</span></span></code></pre></div><h5 id="response-status-7">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-7">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -1609,8 +1609,8 @@
</span></span></code></pre></div><p>At this point <code>"page": null</code> means there are no more pages (note: when the backend is Cassandra, for performance reasons the returned <code>page</code> value may be non-null even when the returned page happens to be the last one; requesting the next page with that <code>page</code> then returns <code>empty data</code> and <code>page = null</code>; other cases behave similarly)</p>
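<p>Putting the two requests together: a client keeps following the returned page token until it is null, and (for the Cassandra case above) also treats an empty result as the end. A minimal client-side loop in Python with the requests library; the same pattern works for /graph/edges:</p>
<pre tabindex="0"><code>import requests
from urllib.parse import quote

base = "http://localhost:8080/graphs/hugegraph/graph/vertices"
url = base + "?page&limit=3"  # first page: "page" carries no value

while True:
    data = requests.get(url).json()
    vertices = data["vertices"]
    if not vertices:  # Cassandra may hand out a token past the last page
        break
    for v in vertices:
        print(v["id"])
    token = data.get("page")
    if token is None:  # no next page
        break
    url = base + "?page=" + quote(token) + "&limit=3"
</code></pre>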
<h4 id="217-根据id获取顶点">2.1.7 根据Id获取顶点</h4>
<h5 id="method--url-8">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"
-</code></pre><h5 id="response-status-8">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"
+</span></span></code></pre></div><h5 id="response-status-8">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-8">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -1639,14 +1639,14 @@
</ul>
<p><strong>Delete vertex by Id only</strong></p>
<h5 id="method--url-9">Method & Url</h5>
-<pre tabindex="0"><code>DELETE http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"
-</code></pre><h5 id="response-status-9">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>DELETE http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"
+</span></span></code></pre></div><h5 id="response-status-9">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">204</span>
</span></span></code></pre></div><p><strong>Delete vertex by Label+Id</strong></p>
<p>Deleting a vertex by specifying both the Label parameter and the Id generally performs better than deleting by Id alone.</p>
<h5 id="method--url-10">Method & Url</h5>
-<pre tabindex="0"><code>DELETE http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"?label=person
-</code></pre><h5 id="response-status-10">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>DELETE http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"?label=person
+</span></span></code></pre></div><h5 id="response-status-10">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">204</span>
</span></span></code></pre></div>
<h3 id="22-edge">2.2 Edge</h3>
@@ -1675,8 +1675,8 @@
</li>
</ul>
<h5 id="method--url">Method & Url</h5>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/hugegraph/graph/edges
-</code></pre><h5 id="request-body">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/hugegraph/graph/edges
+</span></span></code></pre></div><h5 id="request-body">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"label"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"created"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"outV"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"1:peter"</span><span style="color:#000;font-weight:bold">,</span>
@@ -1710,8 +1710,8 @@
<li>check_vertex: whether to check that the vertices exist (true | false); when set to true, an error is raised if the source or target vertex of an edge to be inserted does not exist (see the sketch after this list).</li>
</ul>
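<p>A sketch of such a batch insert with the vertex-existence check disabled, in Python with the requests library; passing check_vertex as a query parameter is an assumption based on the parameter list above, and the edge property values are illustrative:</p>
<pre tabindex="0"><code>import requests

url = "http://localhost:8080/graphs/hugegraph/graph/edges/batch"
edges = [
    {
        "label": "created",
        "outV": "1:peter",
        "outVLabel": "person",
        "inV": "2:lop",
        "inVLabel": "software",
        "properties": {"date": "2017-5-18", "weight": 0.2},
    }
]
# check_vertex=false skips the existence check for source/target vertices.
resp = requests.post(url, params={"check_vertex": "false"}, json=edges)
print(resp.status_code, resp.json())
</code></pre>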
<h5 id="method--url-1">Method & Url</h5>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/hugegraph/graph/edges/batch
-</code></pre><h5 id="request-body-1">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/hugegraph/graph/edges/batch
+</span></span></code></pre></div><h5 id="request-body-1">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">[</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"label"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"created"</span><span style="color:#000;font-weight:bold">,</span>
@@ -1745,8 +1745,8 @@
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">]</span>
</span></span></code></pre></div><h4 id="223-更新边属性">2.2.3 更新边属性</h4>
<h5 id="method--url-2">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?action=append
-</code></pre><h5 id="request-body-2">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?action=append
+</span></span></code></pre></div><h5 id="request-body-2">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"weight"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">1.0</span>
@@ -1806,8 +1806,8 @@
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">]</span>
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h5 id="method--url-3">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://127.0.0.1:8080/graphs/hugegraph/graph/edges/batch
-</code></pre><h5 id="request-body-3">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://127.0.0.1:8080/graphs/hugegraph/graph/edges/batch
+</span></span></code></pre></div><h5 id="request-body-3">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"edges"</span><span style="color:#000;font-weight:bold">:[</span>
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">{</span>
@@ -1877,8 +1877,8 @@
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="225-删除边属性">2.2.5 删除边属性</h4>
<h5 id="method--url-4">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?action=eliminate
-</code></pre><h5 id="request-body-4">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?action=eliminate
+</span></span></code></pre></div><h5 id="request-body-4">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"properties"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"weight"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#0000cf;font-weight:bold">1.0</span>
@@ -1977,8 +1977,8 @@
</table>
<p><strong>Query edges connected to vertex person:josh (vertex_id="1:josh") with label created</strong></p>
<h5 id="method--url-5">Method & Url</h5>
-<pre tabindex="0"><code>GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?vertex_id="1:josh"&direction=BOTH&label=created&properties={}
-</code></pre><h5 id="response-status-5">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?vertex_id="1:josh"&direction=BOTH&label=created&properties={}
+</span></span></code></pre></div><h5 id="response-status-5">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-5">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -2013,8 +2013,8 @@
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><p><strong>Paged query of all edges, fetching the first page (page without a parameter value), limited to 3 records</strong></p>
<h5 id="method--url-6">Method & Url</h5>
-<pre tabindex="0"><code>GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?page&limit=3
-</code></pre><h5 id="response-status-6">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?page&limit=3
+</span></span></code></pre></div><h5 id="response-status-6">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-6">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -2064,8 +2064,8 @@
Assign this value to the page parameter when querying the next page.</p>
<p><strong>Paged query of all edges, fetching the next page (page set to the value returned by the previous page), limited to 3 records</strong></p>
<h5 id="method--url-7">Method & Url</h5>
-<pre tabindex="0"><code>GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?page=002500100753313a6a6f73681210010004000000020953323a726970706c65f07ffffffcf07ffffffd8460d63f4b398dd2721ed4fdb7716b420004&limit=3
-</code></pre><h5 id="response-status-7">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?page=002500100753313a6a6f73681210010004000000020953323a726970706c65f07ffffffcf07ffffffd8460d63f4b398dd2721ed4fdb7716b420004&limit=3
+</span></span></code></pre></div><h5 id="response-status-7">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-7">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -2114,8 +2114,8 @@
</span></span></code></pre></div><p>At this point <code>"page": null</code> means there are no more pages (note: when the backend is Cassandra, for performance reasons the returned <code>page</code> value may be non-null even when the returned page happens to be the last one; requesting the next page with that <code>page</code> then returns <code>empty data</code> and <code>page = null</code>; other cases behave similarly)</p>
<h4 id="227-根据id获取边">2.2.7 根据Id获取边</h4>
<h5 id="method--url-8">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop
-</code></pre><h5 id="response-status-8">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop
+</span></span></code></pre></div><h5 id="response-status-8">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-8">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -2138,14 +2138,14 @@
</ul>
<p><strong>Delete edge by Id only</strong></p>
<h5 id="method--url-9">Method & Url</h5>
-<pre tabindex="0"><code>DELETE http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop
-</code></pre><h5 id="response-status-9">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>DELETE http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop
+</span></span></code></pre></div><h5 id="response-status-9">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">204</span>
</span></span></code></pre></div><p><strong>Delete edge by Label+Id</strong></p>
<p>Deleting an edge by specifying both the Label parameter and the Id generally performs better than deleting by Id alone.</p>
<h5 id="method--url-10">Method & Url</h5>
-<pre tabindex="0"><code>DELETE http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?label=person
-</code></pre><h5 id="response-status-10">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>DELETE http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?label=person
+</span></span></code></pre></div><h5 id="response-status-10">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">204</span>
</span></span></code></pre></div>
<h3 id="31-traverser-api概述">3.1 traverser API概述</h3>
@@ -2328,20 +2328,20 @@
</span></span><span style="display:flex;"><span> <span style="color:#ce5c00;font-weight:bold">}</span>
</span></span><span style="display:flex;"><span><span style="color:#ce5c00;font-weight:bold">}</span>
</span></span></code></pre></div><p>The vertex IDs are:</p>
-<pre tabindex="0"><code>"2:ripple",
-"1:vadas",
-"1:peter",
-"1:josh",
-"1:marko",
-"2:lop"
-</code></pre><p>The edge IDs are:</p>
-<pre tabindex="0"><code>"S1:peter>2>>S2:lop",
-"S1:josh>2>>S2:lop",
-"S1:josh>2>>S2:ripple",
-"S1:marko>1>20130220>S1:josh",
-"S1:marko>1>20160110>S1:vadas",
-"S1:marko>2>>S2:lop"
-</code></pre><h4 id="321-k-out-apiget基础版">3.2.1 K-out API(GET,基础版)</h4>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>"2:ripple",
+</span></span><span style="display:flex;"><span>"1:vadas",
+</span></span><span style="display:flex;"><span>"1:peter",
+</span></span><span style="display:flex;"><span>"1:josh",
+</span></span><span style="display:flex;"><span>"1:marko",
+</span></span><span style="display:flex;"><span>"2:lop"
+</span></span></code></pre></div><p>The edge IDs are:</p>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>"S1:peter>2>>S2:lop",
+</span></span><span style="display:flex;"><span>"S1:josh>2>>S2:lop",
+</span></span><span style="display:flex;"><span>"S1:josh>2>>S2:ripple",
+</span></span><span style="display:flex;"><span>"S1:marko>1>20130220>S1:josh",
+</span></span><span style="display:flex;"><span>"S1:marko>1>20160110>S1:vadas",
+</span></span><span style="display:flex;"><span>"S1:marko>2>>S2:lop"
+</span></span></code></pre></div><h4 id="321-k-out-apiget基础版">3.2.1 K-out API(GET,基础版)</h4>
<h5 id="3211-功能介绍">3.2.1.1 功能介绍</h5>
<p>Given a starting vertex, a direction, an optional edge type, and a depth, find the vertices reachable from the starting vertex in exactly depth steps</p>
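<p>Note that string vertex ids keep their quotes inside the query string. Before the parameter details below, a minimal Python sketch of the basic GET call (requests library; the graph name hugegraph is substituted for the {graph} placeholder):</p>
<pre tabindex="0"><code>import requests

# The quotes are part of the id: source="1:marko", exactly 2 steps out.
url = "http://localhost:8080/graphs/hugegraph/traversers/kout"
params = {"source": '"1:marko"', "max_depth": 2}
resp = requests.get(url, params=params)
# The response is expected to contain the ids of the reachable vertices.
print(resp.status_code, resp.json())
</code></pre>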
<h6 id="params">Params</h6>
@@ -2357,8 +2357,8 @@
</ul>
<h5 id="3212-使用方法">3.2.1.2 使用方法</h5>
<h6 id="method--url">Method & Url</h6>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/{graph}/traversers/kout?source="1:marko"&max_depth=2
-</code></pre><h6 id="response-status">Response Status</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/{graph}/traversers/kout?source="1:marko"&max_depth=2
+</span></span></code></pre></div><h6 id="response-status">Response Status</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h6 id="response-body">Response Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -2416,8 +2416,8 @@
</ul>
<h5 id="3222-使用方法">3.2.2.2 使用方法</h5>
<h6 id="method--url-1">Method & Url</h6>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/{graph}/traversers/kout
-</code></pre><h6 id="request-body">Request Body</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/{graph}/traversers/kout
+</span></span></code></pre></div><h6 id="request-body">Request Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"source"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"1:marko"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"step"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
@@ -2527,8 +2527,8 @@
</ul>
<h5 id="3232-使用方法">3.2.3.2 使用方法</h5>
<h6 id="method--url-2">Method & Url</h6>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/{graph}/traversers/kneighbor?source=“1:marko”&max_depth=2
-</code></pre><h6 id="response-status-2">Response Status</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/{graph}/traversers/kneighbor?source=“1:marko”&max_depth=2
+</span></span></code></pre></div><h6 id="response-status-2">Response Status</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h6 id="response-body-2">Response Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -2588,8 +2588,8 @@
</ul>
<h5 id="3242-使用方法">3.2.4.2 使用方法</h5>
<h6 id="method--url-3">Method & Url</h6>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/{graph}/traversers/kneighbor
-</code></pre><h6 id="request-body-1">Request Body</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/{graph}/traversers/kneighbor
+</span></span></code></pre></div><h6 id="request-body-1">Request Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"source"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"1:marko"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"step"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
@@ -2737,8 +2737,8 @@
</ul>
<h5 id="3252-使用方法">3.2.5.2 使用方法</h5>
<h6 id="method--url-4">Method & Url</h6>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/{graph}/traversers/sameneighbors?vertex=“1:marko”&other="1:josh"
-</code></pre><h6 id="response-status-4">Response Status</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/{graph}/traversers/sameneighbors?vertex=“1:marko”&other="1:josh"
+</span></span></code></pre></div><h6 id="response-status-4">Response Status</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h6 id="response-body-4">Response Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -2764,8 +2764,8 @@
</ul>
<h5 id="3262-使用方法">3.2.6.2 使用方法</h5>
<h6 id="method--url-5">Method & Url</h6>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/{graph}/traversers/jaccardsimilarity?vertex="1:marko"&other="1:josh"
-</code></pre><h6 id="response-status-5">Response Status</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/{graph}/traversers/jaccardsimilarity?vertex="1:marko"&other="1:josh"
+</span></span></code></pre></div><h6 id="response-status-5">Response Status</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h6 id="response-body-5">Response Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -2796,8 +2796,8 @@
</ul>
<h5 id="3272-使用方法">3.2.7.2 使用方法</h5>
<h6 id="method--url-6">Method & Url</h6>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/{graph}/traversers/jaccardsimilarity
-</code></pre><h6 id="request-body-2">Request Body</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/{graph}/traversers/jaccardsimilarity
+</span></span></code></pre></div><h6 id="request-body-2">Request Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"vertex"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"1:marko"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"step"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
@@ -2834,8 +2834,8 @@
</ul>
<h5 id="3282-使用方法">3.2.8.2 使用方法</h5>
<h6 id="method--url-7">Method & Url</h6>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/{graph}/traversers/shortestpath?source="1:marko"&target="2:ripple"&max_depth=3
-</code></pre><h6 id="response-status-7">Response Status</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/{graph}/traversers/shortestpath?source="1:marko"&target="2:ripple"&max_depth=3
+</span></span></code></pre></div><h6 id="response-status-7">Response Status</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h6 id="response-body-7">Response Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -2867,8 +2867,8 @@
</ul>
<h5 id="3292-使用方法">3.2.9.2 使用方法</h5>
<h6 id="method--url-8">Method & Url</h6>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/{graph}/traversers/allshortestpaths?source="A"&target="Z"&max_depth=10
-</code></pre><h6 id="response-status-8">Response Status</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/{graph}/traversers/allshortestpaths?source="A"&target="Z"&max_depth=10
+</span></span></code></pre></div><h6 id="response-status-8">Response Status</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h6 id="response-body-8">Response Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -2914,8 +2914,8 @@
</ul>
<h5 id="32102-使用方法">3.2.10.2 使用方法</h5>
<h6 id="method--url-9">Method & Url</h6>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/{graph}/traversers/weightedshortestpath?source="1:marko"&target="2:ripple"&weight="weight"&with_vertex=true
-</code></pre><h6 id="response-status-9">Response Status</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/{graph}/traversers/weightedshortestpath?source="1:marko"&target="2:ripple"&weight="weight"&with_vertex=true
+</span></span></code></pre></div><h6 id="response-status-9">Response Status</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h6 id="response-body-9">Response Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -2982,8 +2982,8 @@
</ul>
<h5 id="32112-使用方法">3.2.11.2 使用方法</h5>
<h6 id="method--url-10">Method & Url</h6>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/{graph}/traversers/singlesourceshortestpath?source="1:marko"&with_vertex=true
-</code></pre><h6 id="response-status-10">Response Status</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/{graph}/traversers/singlesourceshortestpath?source="1:marko"&with_vertex=true
+</span></span></code></pre></div><h6 id="response-status-10">Response Status</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h6 id="response-body-10">Response Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -3128,8 +3128,8 @@
</ul>
<h5 id="32122-使用方法">3.2.12.2 使用方法</h5>
<h6 id="method--url-11">Method & Url</h6>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/{graph}/traversers/multinodeshortestpath
-</code></pre><h6 id="request-body-3">Request Body</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/{graph}/traversers/multinodeshortestpath
+</span></span></code></pre></div><h6 id="request-body-3">Request Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"vertices"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"ids"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span><span style="color:#4e9a06">"382:marko"</span><span style="color:#000;font-weight:bold">,</span> <span style="color:#4e9a06">"382:josh"</span><span style="color:#000;font-weight:bold">,</span> <span style="color:#4e9a06">"382:vadas"</span><span style="color:#000;font-weight:bold">,</span> <span style="color:#4e9a06">"382:peter"</span><span style="color:#000;font-weight:bold">,</span> <span style="color:#4e9a06">"383:lop"</span><span style="color:#000;font-weight:bold">,</span> <span style="color:#4e9a06">"383:ripple"</span><span style="color:#000;font-weight:bold">]</span>
@@ -3335,8 +3335,8 @@
</ul>
<h5 id="32132-使用方法">3.2.13.2 使用方法</h5>
<h6 id="method--url-12">Method & Url</h6>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/{graph}/traversers/paths?source="1:marko"&target="1:josh"&max_depth=5
-</code></pre><h6 id="response-status-12">Response Status</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/{graph}/traversers/paths?source="1:marko"&target="1:josh"&max_depth=5
+</span></span></code></pre></div><h6 id="response-status-12">Response Status</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h6 id="response-body-12">Response Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -3412,8 +3412,8 @@
</ul>
<h5 id="32142-使用方法">3.2.14.2 使用方法</h5>
<h6 id="method--url-13">Method & Url</h6>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/{graph}/traversers/paths
-</code></pre><h6 id="request-body-4">Request Body</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/{graph}/traversers/paths
+</span></span></code></pre></div><h6 id="request-body-4">Request Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span><span style="color:#204a87;font-weight:bold">"sources"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"ids"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[</span><span style="color:#4e9a06">"1:marko"</span><span style="color:#000;font-weight:bold">]</span>
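<p>An illustrative call (a sketch; the <code>targets</code>, <code>step</code> and <code>max_depth</code> values are assumptions modeled on the parameter list of 3.2.14, not copied verbatim from this document):</p>
<pre tabindex="0"><code>import requests

body = {
    "sources": {"ids": ["1:marko"]},
    "targets": {"ids": ["1:josh"]},   # assumed target set
    "step": {"direction": "BOTH"},    # assumed traversal step
    "max_depth": 5,
}
resp = requests.post("http://localhost:8080/graphs/hugegraph/traversers/paths",
                     json=body)
print(resp.json())
</code></pre>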
@@ -3503,8 +3503,8 @@
</ul>
<h5 id="32152-使用方法">3.2.15.2 使用方法</h5>
<h6 id="method--url-14">Method & Url</h6>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/{graph}/traversers/customizedpaths
-</code></pre><h6 id="request-body-5">Request Body</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/{graph}/traversers/customizedpaths
+</span></span></code></pre></div><h6 id="request-body-5">Request Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"sources"</span><span style="color:#000;font-weight:bold">:{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"ids"</span><span style="color:#000;font-weight:bold">:[</span>
@@ -3690,8 +3690,8 @@
</ul>
<h5 id="32162-使用方法">3.2.16.2 使用方法</h5>
<h6 id="method--url-15">Method & Url</h6>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/{graph}/traversers/templatepaths
-</code></pre><h6 id="request-body-6">Request Body</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/{graph}/traversers/templatepaths
+</span></span></code></pre></div><h6 id="request-body-6">Request Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"sources"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"ids"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#000;font-weight:bold">[],</span>
@@ -3831,8 +3831,8 @@
</ul>
<h5 id="32172-使用方法">3.2.17.2 使用方法</h5>
<h6 id="method--url-16">Method & Url</h6>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/{graph}/traversers/crosspoints?source="2:lop"&target="2:ripple"&max_depth=5&direction=IN
-</code></pre><h6 id="response-status-16">Response Status</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/{graph}/traversers/crosspoints?source="2:lop"&target="2:ripple"&max_depth=5&direction=IN
+</span></span></code></pre></div><h6 id="response-status-16">Response Status</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h6 id="response-body-16">Response Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -3911,8 +3911,8 @@
</ul>
<h5 id="32182-使用方法">3.2.18.2 使用方法</h5>
<h6 id="method--url-17">Method & Url</h6>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/{graph}/traversers/customizedcrosspoints
-</code></pre><h6 id="request-body-7">Request Body</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/{graph}/traversers/customizedcrosspoints
+</span></span></code></pre></div><h6 id="request-body-7">Request Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"sources"</span><span style="color:#000;font-weight:bold">:{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"ids"</span><span style="color:#000;font-weight:bold">:[</span>
@@ -4059,8 +4059,8 @@
</ul>
<h5 id="32192-使用方法">3.2.19.2 使用方法</h5>
<h6 id="method--url-18">Method & Url</h6>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/{graph}/traversers/rings?source="1:marko"&max_depth=2
-</code></pre><h6 id="response-status-18">Response Status</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/{graph}/traversers/rings?source="1:marko"&max_depth=2
+</span></span></code></pre></div><h6 id="response-status-18">Response Status</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h6 id="response-body-18">Response Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -4110,8 +4110,8 @@
</ul>
<h5 id="32202-使用方法">3.2.20.2 使用方法</h5>
<h6 id="method--url-19">Method & Url</h6>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/{graph}/traversers/rays?source="1:marko"&max_depth=2&direction=OUT
-</code></pre><h6 id="response-status-19">Response Status</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/{graph}/traversers/rays?source="1:marko"&max_depth=2&direction=OUT
+</span></span></code></pre></div><h6 id="response-status-19">Response Status</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h6 id="response-body-19">Response Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -4216,8 +4216,8 @@
</ul>
<h5 id="32212-使用方法">3.2.21.2 使用方法</h5>
<h6 id="method--url-20">Method & Url</h6>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/hugegraph/traversers/fusiformsimilarity
-</code></pre><h6 id="request-body-8">Request Body</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/hugegraph/traversers/fusiformsimilarity
+</span></span></code></pre></div><h6 id="request-body-8">Request Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"sources"</span><span style="color:#000;font-weight:bold">:{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"ids"</span><span style="color:#000;font-weight:bold">:[],</span>
@@ -4303,8 +4303,8 @@
<li>ids: list of vertex ids to query</li>
</ul>
<h6 id="method--url-21">Method & Url</h6>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/traversers/vertices?ids="1:marko"&ids="2:lop"
-</code></pre><h6 id="response-status-21">Response Status</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/traversers/vertices?ids="1:marko"&ids="2:lop"
+</span></span></code></pre></div><h6 id="response-status-21">Response Status</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h6 id="response-body-21">Response Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -4368,8 +4368,8 @@
<li>split_size: shard size, required</li>
</ul>
<h6 id="method--url-22">Method & Url</h6>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/traversers/vertices/shards?split_size=67108864
-</code></pre><h6 id="response-status-22">Response Status</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/traversers/vertices/shards?split_size=67108864
+</span></span></code></pre></div><h6 id="response-status-22">Response Status</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h6 id="response-body-22">Response Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -4407,8 +4407,8 @@
<li>page_limit: when fetching vertices by page, the upper limit on the number of vertices per page, optional, defaults to 100000</li>
</ul>
<h6 id="method--url-23">Method & Url</h6>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/traversers/vertices/scan?start=0&end=4294967295
-</code></pre><h6 id="response-status-23">Response Status</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/traversers/vertices/scan?start=0&end=4294967295
+</span></span></code></pre></div><h6 id="response-status-23">Response Status</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h6 id="response-body-23">Response Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -4577,8 +4577,8 @@
<li>ids: list of edge ids to query</li>
</ul>
<h6 id="method--url-24">Method & Url</h6>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/traversers/edges?ids="S1:josh>1>>S2:lop"&ids="S1:josh>1>>S2:ripple"
-</code></pre><h6 id="response-status-24">Response Status</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/traversers/edges?ids="S1:josh>1>>S2:lop"&ids="S1:josh>1>>S2:ripple"
+</span></span></code></pre></div><h6 id="response-status-24">Response Status</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h6 id="response-body-24">Response Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -4618,8 +4618,8 @@
<li>split_size: shard size, required</li>
</ul>
<h6 id="method--url-25">Method & Url</h6>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/traversers/edges/shards?split_size=4294967295
-</code></pre><h6 id="response-status-25">Response Status</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/traversers/edges/shards?split_size=4294967295
+</span></span></code></pre></div><h6 id="response-status-25">Response Status</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h6 id="response-body-25">Response Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -4661,8 +4661,8 @@
<li>page_limit: when fetching edges by page, the upper limit on the number of edges per page, optional, defaults to 100000</li>
</ul>
<h6 id="method--url-26">Method & Url</h6>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/traversers/edges/scan?start=0&end=3221225469
-</code></pre><h6 id="response-status-26">Response Status</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/traversers/edges/scan?start=0&end=3221225469
+</span></span></code></pre></div><h6 id="response-status-26">Response Status</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h6 id="response-body-26">Response Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -4893,8 +4893,8 @@
</ul>
<h5 id="4212-使用方法">4.2.1.2 使用方法</h5>
<h6 id="method--url">Method & Url</h6>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/hugegraph/traversers/personalrank
-</code></pre><h6 id="request-body">Request Body</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/hugegraph/traversers/personalrank
+</span></span></code></pre></div><h6 id="request-body">Request Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"source"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"1:1"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"label"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"rating"</span><span style="color:#000;font-weight:bold">,</span>
@@ -5026,8 +5026,8 @@
</ul>
<h5 id="4222-使用方法">4.2.2.2 使用方法</h5>
<h6 id="method--url-1">Method & Url</h6>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/hugegraph/traversers/neighborrank
-</code></pre><h6 id="request-body-1">Request Body</h6>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/hugegraph/traversers/neighborrank
+</span></span></code></pre></div><h6 id="request-body-1">Request Body</h6>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"source"</span><span style="color:#000;font-weight:bold">:</span><span style="color:#4e9a06">"O"</span><span style="color:#000;font-weight:bold">,</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"steps"</span><span style="color:#000;font-weight:bold">:[</span>
@@ -5097,8 +5097,8 @@
<p>Variables can be used to store data about the entire graph; the data is accessed as key-value pairs</p>
<h4 id="511-创建或者更新某个键值对">5.1.1 创建或者更新某个键值对</h4>
<h5 id="method--url">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/variables/name
-</code></pre><h5 id="request-body">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/variables/name
+</span></span></code></pre></div><h5 id="request-body">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"data"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"tom"</span>
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
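<p>The four variable operations of section 5.1 map directly onto HTTP verbs; a combined sketch (assuming a local server and graph <code>hugegraph</code>):</p>
<pre tabindex="0"><code>import requests

base = "http://localhost:8080/graphs/hugegraph/variables"
requests.put(base + "/name", json={"data": "tom"})  # 5.1.1 create or update
print(requests.get(base).json())                    # 5.1.2 list all pairs
print(requests.get(base + "/name").json())          # 5.1.3 get one pair
requests.delete(base + "/name")                     # 5.1.4 delete, expect 204
</code></pre>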
@@ -5110,8 +5110,8 @@
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="512-列出全部键值对">5.1.2 列出全部键值对</h4>
<h5 id="method--url-1">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/variables
-</code></pre><h5 id="response-status-1">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/variables
+</span></span></code></pre></div><h5 id="response-status-1">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-1">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5119,8 +5119,8 @@
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="513-列出某个键值对">5.1.3 列出某个键值对</h4>
<h5 id="method--url-2">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/variables/name
-</code></pre><h5 id="response-status-2">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/variables/name
+</span></span></code></pre></div><h5 id="response-status-2">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-2">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5128,15 +5128,15 @@
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="514-删除某个键值对">5.1.4 删除某个键值对</h4>
<h5 id="method--url-3">Method & Url</h5>
-<pre tabindex="0"><code>DELETE http://localhost:8080/graphs/hugegraph/variables/name
-</code></pre><h5 id="response-status-3">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>DELETE http://localhost:8080/graphs/hugegraph/variables/name
+</span></span></code></pre></div><h5 id="response-status-3">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">204</span>
</span></span></code></pre></div> Docs: Graphs API /cn/docs/clients/restful-api/graphs/
<h3 id="61-graphs">6.1 Graphs</h3>
<h4 id="611-列出数据库中全部的图">6.1.1 列出数据库中全部的图</h4>
<h5 id="method--url">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs
-</code></pre><h5 id="response-status">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs
+</span></span></code></pre></div><h5 id="response-status">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5147,8 +5147,8 @@
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="612-查看某个图的信息">6.1.2 查看某个图的信息</h4>
<h5 id="method--url-1">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph
-</code></pre><h5 id="response-status-1">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph
+</span></span></code></pre></div><h5 id="response-status-1">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-1">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5162,8 +5162,8 @@
<li>confirm_message: defaults to <code>I'm sure to delete all data</code></li>
</ul>
<h5 id="method--url-2">Method & Url</h5>
-<pre tabindex="0"><code>DELETE http://localhost:8080/graphs/hugegraph/clear?confirm_message=I%27m+sure+to+delete+all+data
-</code></pre><h5 id="response-status-2">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>DELETE http://localhost:8080/graphs/hugegraph/clear?confirm_message=I%27m+sure+to+delete+all+data
+</span></span></code></pre></div><h5 id="response-status-2">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">204</span>
</span></span></code></pre></div><h4 id="614-克隆一个图该操作需要管理员权限">6.1.4 克隆一个图,<strong>该操作需要管理员权限</strong></h4>
<h5 id="params-1">Params</h5>
@@ -5171,15 +5171,15 @@
<li>clone_graph_name: name of an existing graph; when cloning from an existing graph, the user may optionally pass a configuration file, which will replace the configuration of the existing graph;</li>
</ul>
<h5 id="method--url-3">Method & Url</h5>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/hugegraph_clone?clone_graph_name=hugegraph
-</code></pre><h5 id="request-body-可选">Request Body 【可选】</h5>
-<pre tabindex="0"><code>gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
-backend=rocksdb
-serializer=binary
-store=hugegraph_clone
-rocksdb.data_path=./hg2
-rocksdb.wal_path=./hg2
-</code></pre><h5 id="response-status-3">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/hugegraph_clone?clone_graph_name=hugegraph
+</span></span></code></pre></div><h5 id="request-body-可选">Request Body 【可选】</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
+</span></span><span style="display:flex;"><span>backend=rocksdb
+</span></span><span style="display:flex;"><span>serializer=binary
+</span></span><span style="display:flex;"><span>store=hugegraph_clone
+</span></span><span style="display:flex;"><span>rocksdb.data_path=./hg2
+</span></span><span style="display:flex;"><span>rocksdb.wal_path=./hg2
+</span></span></code></pre></div><h5 id="response-status-3">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-2">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5188,15 +5188,15 @@ rocksdb.wal_path=./hg2
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="615-创建一个图该操作需要管理员权限">6.1.5 创建一个图,<strong>该操作需要管理员权限</strong></h4>
<h5 id="method--url-4">Method & Url</h5>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/hugegraph2
-</code></pre><h5 id="request-body">Request Body</h5>
-<pre tabindex="0"><code>gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
-backend=rocksdb
-serializer=binary
-store=hugegraph2
-rocksdb.data_path=./hg2
-rocksdb.wal_path=./hg2
-</code></pre><h5 id="response-status-4">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/hugegraph2
+</span></span></code></pre></div><h5 id="request-body">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
+</span></span><span style="display:flex;"><span>backend=rocksdb
+</span></span><span style="display:flex;"><span>serializer=binary
+</span></span><span style="display:flex;"><span>store=hugegraph2
+</span></span><span style="display:flex;"><span>rocksdb.data_path=./hg2
+</span></span><span style="display:flex;"><span>rocksdb.wal_path=./hg2
+</span></span></code></pre></div><h5 id="response-status-4">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-3">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5210,30 +5210,35 @@ rocksdb.wal_path=./hg2
<li>confirm_message: defaults to <code>I'm sure to drop the graph</code></li>
</ul>
<h5 id="method--url-5">Method & Url</h5>
-<pre tabindex="0"><code>DELETE http://localhost:8080/graphs/hugegraph_clone?confirm_message=I%27m%20sure%20to%20drop%20the%20graph
-</code></pre><h5 id="response-status-5">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>DELETE http://localhost:8080/graphs/hugegraph_clone?confirm_message=I%27m%20sure%20to%20drop%20the%20graph
+</span></span></code></pre></div><h5 id="response-status-5">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">204</span>
</span></span></code></pre></div><h3 id="62-conf">6.2 Conf</h3>
<h4 id="621-查看某个图的配置该操作需要管理员权限">6.2.1 查看某个图的配置,<strong>该操作需要管理员权限</strong></h4>
<h5 id="method--url-6">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/conf
-</code></pre><h5 id="response-status-6">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/conf
+</span></span></code></pre></div><h5 id="response-status-6">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-4">Response Body</h5>
-<pre tabindex="0"><code class="language-properties" data-lang="properties"># gremlin entrence to create graph
-gremlin.graph=com.baidu.hugegraph.HugeFactory
-# cache config
-#schema.cache_capacity=1048576
-#graph.cache_capacity=10485760
-#graph.cache_expire=600
-# schema illegal name template
-#schema.illegal_name_regex=\s+|~.*
-#vertex.default_label=vertex
-backend=cassandra
-serializer=cassandra
-store=hugegraph
-...
-</code></pre><h3 id="63-mode">6.3 Mode</h3>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span># gremlin entrence to create graph
+</span></span><span style="display:flex;"><span>gremlin.graph=com.baidu.hugegraph.HugeFactory
+</span></span><span style="display:flex;"><span>
+</span></span><span style="display:flex;"><span># cache config
+</span></span><span style="display:flex;"><span>#schema.cache_capacity=1048576
+</span></span><span style="display:flex;"><span>#graph.cache_capacity=10485760
+</span></span><span style="display:flex;"><span>#graph.cache_expire=600
+</span></span><span style="display:flex;"><span>
+</span></span><span style="display:flex;"><span># schema illegal name template
+</span></span><span style="display:flex;"><span>#schema.illegal_name_regex=\s+|~.*
+</span></span><span style="display:flex;"><span>
+</span></span><span style="display:flex;"><span>#vertex.default_label=vertex
+</span></span><span style="display:flex;"><span>
+</span></span><span style="display:flex;"><span>backend=cassandra
+</span></span><span style="display:flex;"><span>serializer=cassandra
+</span></span><span style="display:flex;"><span>
+</span></span><span style="display:flex;"><span>store=hugegraph
+</span></span><span style="display:flex;"><span>...
+</span></span></code></pre></div><h3 id="63-mode">6.3 Mode</h3>
<p>Legal graph modes include: NONE, RESTORING, MERGING, LOADING</p>
<ul>
<li>None mode (default): writes of metadata and graph data are in the normal state. In particular:
@@ -5266,8 +5271,8 @@ store=hugegraph
<p>Normally the graph mode is None. When a graph needs to be restored, temporarily switch the graph mode to Restoring or Merging as needed, and restore the mode to None once the restore is finished.</p>
<h4 id="631-查看某个图的模式">6.3.1 查看某个图的模式.</h4>
<h5 id="method--url-7">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/mode
-</code></pre><h5 id="response-status-7">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/mode
+</span></span></code></pre></div><h5 id="response-status-7">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-5">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5278,10 +5283,10 @@ store=hugegraph
</blockquote>
<h4 id="632-设置某个图的模式-该操作需要管理员权限">6.3.2 设置某个图的模式. <strong>该操作需要管理员权限</strong></h4>
<h5 id="method--url-8">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/mode
-</code></pre><h5 id="request-body-1">Request Body</h5>
-<pre tabindex="0"><code>"RESTORING"
-</code></pre><blockquote>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/mode
+</span></span></code></pre></div><h5 id="request-body-1">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>"RESTORING"
+</span></span></code></pre></div><blockquote>
<p>Legal graph modes include: NONE, RESTORING, MERGING</p>
</blockquote>
<h5 id="response-status-8">Response Status</h5>
@@ -5296,8 +5301,8 @@ store=hugegraph
<li>name: name of the graph</li>
</ul>
<h5 id="method--url-9">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/graph_read_mode
-</code></pre><h5 id="response-status-9">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/graph_read_mode
+</span></span></code></pre></div><h5 id="response-status-9">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-7">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5309,10 +5314,10 @@ store=hugegraph
<li>name: name of the graph</li>
</ul>
<h5 id="method--url-10">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/graph_read_mode
-</code></pre><h5 id="request-body-2">Request Body</h5>
-<pre tabindex="0"><code>"OLTP_ONLY"
-</code></pre><blockquote>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/graph_read_mode
+</span></span></code></pre></div><h5 id="request-body-2">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>"OLTP_ONLY"
+</span></span></code></pre></div><blockquote>
<p>Legal graph read modes include: ALL, OLTP_ONLY, OLAP_ONLY</p>
</blockquote>
<h5 id="response-status-10">Response Status</h5>
@@ -5328,8 +5333,8 @@ store=hugegraph
<li>name: name of the graph</li>
</ul>
<h5 id="method--url-11">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/snapshot_create
-</code></pre><h5 id="response-status-11">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/snapshot_create
+</span></span></code></pre></div><h5 id="response-status-11">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-9">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5341,8 +5346,8 @@ store=hugegraph
<li>name: name of the graph</li>
</ul>
<h5 id="method--url-12">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/snapshot_resume
-</code></pre><h5 id="response-status-12">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/snapshot_resume
+</span></span></code></pre></div><h5 id="response-status-12">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-10">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5355,8 +5360,8 @@ store=hugegraph
<li>name: name of the graph</li>
</ul>
<h5 id="method--url-13">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/compact
-</code></pre><h5 id="response-status-13">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/compact
+</span></span></code></pre></div><h5 id="response-status-13">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-11">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5375,8 +5380,8 @@ store=hugegraph
<li>limit: upper limit on the number of async tasks returned</li>
</ul>
<h5 id="method--url">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/tasks?status=success
-</code></pre><h5 id="response-status">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/tasks?status=success
+</span></span></code></pre></div><h5 id="response-status">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5396,8 +5401,8 @@ store=hugegraph
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="712-查看某个异步任务的信息">7.1.2 查看某个异步任务的信息</h4>
<h5 id="method--url-1">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/tasks/2
-</code></pre><h5 id="response-status-1">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/tasks/2
+</span></span></code></pre></div><h5 id="response-status-1">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-1">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5415,8 +5420,8 @@ store=hugegraph
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="713-删除某个异步任务信息不删除异步任务本身">7.1.3 删除某个异步任务信息,<strong>不删除异步任务本身</strong></h4>
<h5 id="method--url-2">Method & Url</h5>
-<pre tabindex="0"><code>DELETE http://localhost:8080/graphs/hugegraph/tasks/2
-</code></pre><h5 id="response-status-2">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>DELETE http://localhost:8080/graphs/hugegraph/tasks/2
+</span></span></code></pre></div><h5 id="response-status-2">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">204</span>
</span></span></code></pre></div><h4 id="714-取消某个异步任务该异步任务必须具有处理中断的能力">7.1.4 取消某个异步任务,<strong>该异步任务必须具有处理中断的能力</strong></h4>
<p>Assume an async task has already been created through the <a href="../gremlin">Gremlin API</a> as follows:</p>
@@ -5430,8 +5435,8 @@ store=hugegraph
</span></span><span style="display:flex;"><span> <span style="color:#4e9a06">"}"</span> <span style="color:#ce5c00;font-weight:bold">+</span>
</span></span><span style="display:flex;"><span><span style="color:#4e9a06">"}"</span>
</span></span></code></pre></div><h5 id="method--url-3">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/tasks/2?action=cancel
-</code></pre><blockquote>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/tasks/2?action=cancel
+</span></span></code></pre></div><blockquote>
<p>Make sure this request is sent within 10 seconds; if it is sent after more than 10 seconds, the task may have already finished and can no longer be cancelled.</p>
</blockquote>
<h5 id="response-status-3">Response Status</h5>
@@ -5452,8 +5457,8 @@ store=hugegraph
</ul>
<p><strong>Query vertices</strong></p>
<h5 id="method--url">Method & Url</h5>
-<pre tabindex="0"><code>GET http://127.0.0.1:8080/gremlin?gremlin=hugegraph.traversal().V('1:marko')
-</code></pre><h5 id="response-status">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://127.0.0.1:8080/gremlin?gremlin=hugegraph.traversal().V('1:marko')
+</span></span></code></pre></div><h5 id="response-status">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5488,8 +5493,8 @@ store=hugegraph
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="812-向hugegraphserver发送gremlin语句post同步执行">8.1.2 向HugeGraphServer发送gremlin语句(POST),同步执行</h4>
<h5 id="method--url-1">Method & Url</h5>
-<pre tabindex="0"><code>POST http://localhost:8080/gremlin
-</code></pre><p><strong>Query vertices</strong></p>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/gremlin
+</span></span></code></pre></div><p><strong>Query vertices</strong></p>
<h5 id="request-body">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"gremlin"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"hugegraph.traversal().V('1:marko')"</span><span style="color:#000;font-weight:bold">,</span>
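<p>The equivalent synchronous POST (sketch; the <code>bindings</code>, <code>language</code> and <code>aliases</code> fields are assumed to accompany the <code>gremlin</code> field shown above):</p>
<pre tabindex="0"><code>import requests

body = {
    "gremlin": "hugegraph.traversal().V('1:marko')",
    "bindings": {},               # assumed companion fields
    "language": "gremlin-groovy",
    "aliases": {},
}
print(requests.post("http://localhost:8080/gremlin", json=body).json())
</code></pre>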
@@ -5580,8 +5585,8 @@ store=hugegraph
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="813-向hugegraphserver发送gremlin语句post异步执行">8.1.3 向HugeGraphServer发送gremlin语句(POST),异步执行</h4>
<h5 id="method--url-2">Method & Url</h5>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/hugegraph/jobs/gremlin
-</code></pre><p><strong>Query vertices</strong></p>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/hugegraph/jobs/gremlin
+</span></span></code></pre></div><p><strong>Query vertices</strong></p>
<h5 id="request-body-2">Request Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"gremlin"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"g.V('1:marko')"</span><span style="color:#000;font-weight:bold">,</span>
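<p>An asynchronous job returns a task id that can then be polled through the Tasks API of section 7.1; a sketch in which the <code>task_id</code> and <code>task_status</code> field names are assumptions:</p>
<pre tabindex="0"><code>import time

import requests

base = "http://localhost:8080/graphs/hugegraph"
job = requests.post(base + "/jobs/gremlin",
                    json={"gremlin": "g.V('1:marko')", "bindings": {},
                          "language": "gremlin-groovy", "aliases": {}}).json()
task_id = job["task_id"]  # assumed response field
while True:
    task = requests.get(f"{base}/tasks/{task_id}").json()
    if task.get("task_status") in ("success", "failed", "cancelled"):
        break
    time.sleep(1)
print(task)
</code></pre>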
@@ -5657,8 +5662,8 @@ city: Beijing})<br>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_email"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"123@xx.com"</span>
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h5 id="method--url">Method & Url</h5>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/hugegraph/auth/users
-</code></pre><h5 id="response-status">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/hugegraph/auth/users
+</span></span></code></pre></div><h5 id="response-status">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">201</span>
</span></span></code></pre></div><h5 id="response-body">Response Body</h5>
<p>In the response, the password is returned as encrypted ciphertext</p>
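<p>Creating a user end to end (sketch; the field values echo the request body above, the example name and phone are assumptions, and the admin credentials a secured server would require are omitted):</p>
<pre tabindex="0"><code>import requests

body = {
    "user_name": "boss",          # assumed example name
    "user_password": "******",
    "user_phone": "182****9088",  # assumed example phone
    "user_email": "123@xx.com",
}
resp = requests.post("http://localhost:8080/graphs/hugegraph/auth/users",
                     json=body)
print(resp.status_code)  # expect 201; the returned password is encrypted
</code></pre>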
@@ -5678,8 +5683,8 @@ city: Beijing})<br>
<li>id: Id of the user to delete</li>
</ul>
<h5 id="method--url-1">Method & Url</h5>
-<pre tabindex="0"><code>DELETE http://localhost:8080/graphs/hugegraph/auth/users/-63:test
-</code></pre><h5 id="response-status-1">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>DELETE http://localhost:8080/graphs/hugegraph/auth/users/-63:test
+</span></span></code></pre></div><h5 id="response-status-1">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">204</span>
</span></span></code></pre></div><h5 id="response-body-1">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">1</span>
@@ -5689,8 +5694,8 @@ city: Beijing})<br>
<li>id: 需要修改的用户 Id</li>
</ul>
<h5 id="method--url-2">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/auth/users/-63:test
-</code></pre><h5 id="request-body-1">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/auth/users/-63:test
+</span></span></code></pre></div><h5 id="request-body-1">Request Body</h5>
<p>修改user_name、user_password和user_phone</p>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"user_name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"test"</span><span style="color:#000;font-weight:bold">,</span>
@@ -5716,8 +5721,8 @@ city: Beijing})<br>
<li>limit: 返回结果条数的上限</li>
</ul>
<h5 id="method--url-3">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/auth/users
-</code></pre><h5 id="response-status-3">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/auth/users
+</span></span></code></pre></div><h5 id="response-status-3">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-3">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5738,8 +5743,8 @@ city: Beijing})<br>
<li>id: 需要查询的用户 Id</li>
</ul>
<h5 id="method--url-4">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/auth/users/-63:admin
-</code></pre><h5 id="response-status-4">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/auth/users/-63:admin
+</span></span></code></pre></div><h5 id="response-status-4">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-4">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5756,8 +5761,8 @@ city: Beijing})<br>
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h4 id="926-查询某个用户的角色">9.2.6 查询某个用户的角色</h4>
<h5 id="method--url-5">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/auth/users/-63:boss/role
-</code></pre><h5 id="response-status-5">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/auth/users/-63:boss/role
+</span></span></code></pre></div><h5 id="response-status-5">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-5">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5788,8 +5793,8 @@ city: Beijing})<br>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"group_description"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"group can do anything"</span>
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h5 id="method--url-6">Method & Url</h5>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/hugegraph/auth/groups
-</code></pre><h5 id="response-status-6">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/hugegraph/auth/groups
+</span></span></code></pre></div><h5 id="response-status-6">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">201</span>
</span></span></code></pre></div><h5 id="response-body-6">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5806,8 +5811,8 @@ city: Beijing})<br>
<li>id: 需要删除的用户组 Id</li>
</ul>
<h5 id="method--url-7">Method & Url</h5>
-<pre tabindex="0"><code>DELETE http://localhost:8080/graphs/hugegraph/auth/groups/-69:grant
-</code></pre><h5 id="response-status-7">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>DELETE http://localhost:8080/graphs/hugegraph/auth/groups/-69:grant
+</span></span></code></pre></div><h5 id="response-status-7">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">204</span>
</span></span></code></pre></div><h5 id="response-body-7">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">1</span>
@@ -5817,8 +5822,8 @@ city: Beijing})<br>
<li>id: 需要修改的用户组 Id</li>
</ul>
<h5 id="method--url-8">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/auth/groups/-69:grant
-</code></pre><h5 id="request-body-3">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/auth/groups/-69:grant
+</span></span></code></pre></div><h5 id="request-body-3">Request Body</h5>
<p>修改group_description</p>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"group_name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"grant"</span><span style="color:#000;font-weight:bold">,</span>
@@ -5842,8 +5847,8 @@ city: Beijing})<br>
<li>limit: 返回结果条数的上限</li>
</ul>
<h5 id="method--url-9">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/auth/groups
-</code></pre><h5 id="response-status-9">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/auth/groups
+</span></span></code></pre></div><h5 id="response-status-9">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-9">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5864,8 +5869,8 @@ city: Beijing})<br>
<li>id: 需要查询的用户组 Id</li>
</ul>
<h5 id="method--url-10">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/auth/groups/-69:all
-</code></pre><h5 id="response-status-10">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/auth/groups/-69:all
+</span></span></code></pre></div><h5 id="response-status-10">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-10">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5910,8 +5915,8 @@ P.gte(18)),properties如果为null表示任意属性均可,如果属性名
</span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">]</span>
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h5 id="method--url-11">Method & Url</h5>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/hugegraph/auth/targets
-</code></pre><h5 id="response-status-11">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/hugegraph/auth/targets
+</span></span></code></pre></div><h5 id="response-status-11">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">201</span>
</span></span></code></pre></div><h5 id="response-body-11">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -5936,8 +5941,8 @@ P.gte(18)),properties如果为null表示任意属性均可,如果属性名
<li>id: 需要删除的资源 Id</li>
</ul>
<h5 id="method--url-12">Method & Url</h5>
-<pre tabindex="0"><code>DELETE http://localhost:8080/graphs/hugegraph/auth/targets/-77:gremlin
-</code></pre><h5 id="response-status-12">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>DELETE http://localhost:8080/graphs/hugegraph/auth/targets/-77:gremlin
+</span></span></code></pre></div><h5 id="response-status-12">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">204</span>
</span></span></code></pre></div><h5 id="response-body-12">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">1</span>
@@ -5947,8 +5952,8 @@ P.gte(18)),properties如果为null表示任意属性均可,如果属性名
<li>id: 需要修改的资源 Id</li>
</ul>
<h5 id="method--url-13">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/auth/targets/-77:gremlin
-</code></pre><h5 id="request-body-5">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/auth/targets/-77:gremlin
+</span></span></code></pre></div><h5 id="request-body-5">Request Body</h5>
<p>修改资源定义中的type</p>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"target_name"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"gremlin"</span><span style="color:#000;font-weight:bold">,</span>
@@ -5986,8 +5991,8 @@ P.gte(18)),properties如果为null表示任意属性均可,如果属性名
<li>limit: 返回结果条数的上限</li>
</ul>
<h5 id="method--url-14">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/auth/targets
-</code></pre><h5 id="response-status-14">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/auth/targets
+</span></span></code></pre></div><h5 id="response-status-14">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-14">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -6032,8 +6037,8 @@ P.gte(18)),properties如果为null表示任意属性均可,如果属性名
<li>id: 需要查询的资源 Id</li>
</ul>
<h5 id="method--url-15">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/auth/targets/-77:grant
-</code></pre><h5 id="response-status-15">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/auth/targets/-77:grant
+</span></span></code></pre></div><h5 id="response-status-15">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-15">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -6068,8 +6073,8 @@ P.gte(18)),properties如果为null表示任意属性均可,如果属性名
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"group"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"-69:all"</span>
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h5 id="method--url-16">Method & Url</h5>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/hugegraph/auth/belongs
-</code></pre><h5 id="response-status-16">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/hugegraph/auth/belongs
+</span></span></code></pre></div><h5 id="response-status-16">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">201</span>
</span></span></code></pre></div><h5 id="response-body-16">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -6086,8 +6091,8 @@ P.gte(18)),properties如果为null表示任意属性均可,如果属性名
<li>id: 需要删除的关联角色 Id</li>
</ul>
<h5 id="method--url-17">Method & Url</h5>
-<pre tabindex="0"><code>DELETE http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:grant
-</code></pre><h5 id="response-status-17">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>DELETE http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:grant
+</span></span></code></pre></div><h5 id="response-status-17">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">204</span>
</span></span></code></pre></div><h5 id="response-body-17">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">1</span>
@@ -6098,8 +6103,8 @@ P.gte(18)),properties如果为null表示任意属性均可,如果属性名
<li>id: 需要修改的关联角色 Id</li>
</ul>
<h5 id="method--url-18">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:grant
-</code></pre><h5 id="request-body-7">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:grant
+</span></span></code></pre></div><h5 id="request-body-7">Request Body</h5>
<p>修改belong_description</p>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"belong_description"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"update test"</span>
@@ -6123,8 +6128,8 @@ P.gte(18)),properties如果为null表示任意属性均可,如果属性名
<li>limit: 返回结果条数的上限</li>
</ul>
<h5 id="method--url-19">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/auth/belongs
-</code></pre><h5 id="response-status-19">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/auth/belongs
+</span></span></code></pre></div><h5 id="response-status-19">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-19">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -6145,8 +6150,8 @@ P.gte(18)),properties如果为null表示任意属性均可,如果属性名
<li>id: 需要查询的关联角色 Id</li>
</ul>
<h5 id="method--url-20">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:all
-</code></pre><h5 id="response-status-20">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:all
+</span></span></code></pre></div><h5 id="response-status-20">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-20">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -6182,8 +6187,8 @@ P.gte(18)),properties如果为null表示任意属性均可,如果属性名
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"access_permission"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"READ"</span>
</span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span>
</span></span></code></pre></div><h5 id="method--url-21">Method & Url</h5>
-<pre tabindex="0"><code>POST http://localhost:8080/graphs/hugegraph/auth/accesses
-</code></pre><h5 id="response-status-21">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>POST http://localhost:8080/graphs/hugegraph/auth/accesses
+</span></span></code></pre></div><h5 id="response-status-21">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">201</span>
</span></span></code></pre></div><h5 id="response-body-21">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -6201,8 +6206,8 @@ P.gte(18)),properties如果为null表示任意属性均可,如果属性名
<li>id: 需要删除的赋权 Id</li>
</ul>
<h5 id="method--url-22">Method & Url</h5>
-<pre tabindex="0"><code>DELETE http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>12>S-77:all
-</code></pre><h5 id="response-status-22">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>DELETE http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>12>S-77:all
+</span></span></code></pre></div><h5 id="response-status-22">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">204</span>
</span></span></code></pre></div><h5 id="response-body-22">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">1</span>
@@ -6213,8 +6218,8 @@ P.gte(18)),properties如果为null表示任意属性均可,如果属性名
<li>id: 需要修改的赋权 Id</li>
</ul>
<h5 id="method--url-23">Method & Url</h5>
-<pre tabindex="0"><code>PUT http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>12>S-77:all
-</code></pre><h5 id="request-body-9">Request Body</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>PUT http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>12>S-77:all
+</span></span></code></pre></div><h5 id="request-body-9">Request Body</h5>
<p>修改access_description</p>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
</span></span><span style="display:flex;"><span> <span style="color:#204a87;font-weight:bold">"access_description"</span><span style="color:#000;font-weight:bold">:</span> <span style="color:#4e9a06">"test"</span>
@@ -6239,8 +6244,8 @@ P.gte(18)),properties如果为null表示任意属性均可,如果属性名
<li>limit: 返回结果条数的上限</li>
</ul>
<h5 id="method--url-24">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/auth/accesses
-</code></pre><h5 id="response-status-24">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/auth/accesses
+</span></span></code></pre></div><h5 id="response-status-24">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-24">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -6262,8 +6267,8 @@ P.gte(18)),properties如果为null表示任意属性均可,如果属性名
<li>id: 需要查询的赋权 Id</li>
</ul>
<h5 id="method--url-25">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>11>S-77:all
-</code></pre><h5 id="response-status-25">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>11>S-77:all
+</span></span></code></pre></div><h5 id="response-status-25">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body-25">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
@@ -6279,8 +6284,8 @@ P.gte(18)),properties如果为null表示任意属性均可,如果属性名
<h3 id="101-other">10.1 Other</h3>
<h4 id="1011-查看hugegraph的版本信息">10.1.1 查看HugeGraph的版本信息</h4>
<h5 id="method--url">Method & Url</h5>
-<pre tabindex="0"><code>GET http://localhost:8080/versions
-</code></pre><h5 id="response-status">Response Status</h5>
+<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-fallback" data-lang="fallback"><span style="display:flex;"><span>GET http://localhost:8080/versions
+</span></span></code></pre></div><h5 id="response-status">Response Status</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#0000cf;font-weight:bold">200</span>
</span></span></code></pre></div><h5 id="response-body">Response Body</h5>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span><span style="color:#000;font-weight:bold">{</span>
diff --git a/cn/docs/clients/restful-api/indexlabel/index.html b/cn/docs/clients/restful-api/indexlabel/index.html
index e97069c4f..b5a2a8310 100644
--- a/cn/docs/clients/restful-api/indexlabel/index.html
+++ b/cn/docs/clients/restful-api/indexlabel/index.html
@@ -11,8 +11,8 @@
Create child page
Create documentation issue
Create project issue
- Print entire section
IndexLabel API
1.5 IndexLabel
假设已经创建好了1.1.3中的 PropertyKeys 、1.2.3中的 VertexLabels 以及 1.3.3中的 EdgeLabels
1.5.1 创建一个IndexLabel
Method & Url
POST http://localhost:8080/graphs/hugegraph/schema/indexlabels
-
Request Body
IndexLabel API
1.5 IndexLabel
假设已经创建好了1.1.3中的 PropertyKeys 、1.2.3中的 VertexLabels 以及 1.3.3中的 EdgeLabels
1.5.1 创建一个IndexLabel
Method & Url
POST http://localhost:8080/graphs/hugegraph/schema/indexlabels
+
Request Body
{
"name": "personByCity",
"base_type": "VERTEX_LABEL",
"base_value": "person",
@@ -35,8 +35,8 @@
},
"task_id": 2
}
-
1.5.2 获取所有的IndexLabel
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/indexlabels
-
Response Status
200
+
1.5.2 获取所有的IndexLabel
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/indexlabels
+
Response Status
200
Response Body
{
"indexlabels": [
{
@@ -82,8 +82,8 @@
}
]
}
-
1.5.3 根据name获取IndexLabel
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/indexlabels/personByCity
-
Response Status
200
+
1.5.3 根据name获取IndexLabel
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/indexlabels/personByCity
+
Response Status
200
Response Body
{
"id": 1,
"base_type": "VERTEX_LABEL",
@@ -94,8 +94,8 @@
],
"index_type": "SECONDARY"
}
-
1.5.4 根据name删除IndexLabel
删除 IndexLabel 会导致删除相关的索引数据,会产生一个异步任务
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/schema/indexlabels/personByCity
-
Response Status
202
+
1.5.4 根据name删除IndexLabel
删除 IndexLabel 会导致删除相关的索引数据,会产生一个异步任务
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/schema/indexlabels/personByCity
+
Response Status
202
Response Body
{
"task_id": 1
}
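Editor's note: the IndexLabel page above walks through create, list, get by name, and delete, with create and delete answering asynchronously with a task_id. A sketch of the full lifecycle; the create body is copied from 1.5.1, except that the fields list is an assumption inferred from the SECONDARY index shown in 1.5.3.

# Sketch: IndexLabel lifecycle against a local HugeGraphServer.
import requests

base = "http://localhost:8080/graphs/hugegraph/schema/indexlabels"

# create: index building is async, so the response carries a task_id
r = requests.post(base, json={
    "name": "personByCity",
    "base_type": "VERTEX_LABEL",
    "base_value": "person",
    "index_type": "SECONDARY",  # per the 1.5.3 response above
    "fields": ["city"],         # assumption inferred from the index name
})
print(r.json().get("task_id"))

# list all, then get one by name (both answer 200)
print(requests.get(base).status_code)
print(requests.get(f"{base}/personByCity").status_code)

# delete by name: also async, answers 202 plus a task_id
r = requests.delete(f"{base}/personByCity")
assert r.status_code == 202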
diff --git a/cn/docs/clients/restful-api/other/index.html b/cn/docs/clients/restful-api/other/index.html
index f7d355208..e3e16c6b7 100644
--- a/cn/docs/clients/restful-api/other/index.html
+++ b/cn/docs/clients/restful-api/other/index.html
@@ -12,8 +12,8 @@
Create child page
Create documentation issue
Create project issue
- Print entire section
Other API
10.1 Other
10.1.1 查看HugeGraph的版本信息
Method & Url
GET http://localhost:8080/versions
-
Response Status
200
+ Print entire section
Other API
10.1 Other
10.1.1 查看HugeGraph的版本信息
Method & Url
GET http://localhost:8080/versions
+
Response Status
200
Response Body
{
"versions": {
"version": "v1",
diff --git a/cn/docs/clients/restful-api/propertykey/index.html b/cn/docs/clients/restful-api/propertykey/index.html
index f5294ef4f..c35b1eb69 100644
--- a/cn/docs/clients/restful-api/propertykey/index.html
+++ b/cn/docs/clients/restful-api/propertykey/index.html
@@ -5,9 +5,9 @@
data_type:属性类型数据类型,包括:bool、byte、int、long、float、double、string、date、uuid、blob,默认string类型
cardinality:属性类型基数,包 …">
PropertyKey API
1.2 PropertyKey
Params说明:
- name:属性类型名称,必填
- data_type:属性类型数据类型,包括:bool、byte、int、long、float、double、string、date、uuid、blob,默认string类型
- cardinality:属性类型基数,包括:single、list、set,默认single
请求体字段说明:
- id:属性类型id值
- properties:属性的属性,对于属性而言,此项为空
- user_data:设置属性类型的通用信息,比如可设置age属性的取值范围,最小为0,最大为100;目前此项不做任何校验,只为后期拓展提供预留入口
1.2.1 创建一个 PropertyKey
Method & Url
POST http://localhost:8080/graphs/hugegraph/schema/propertykeys
-
Request Body
PropertyKey API
1.2 PropertyKey
Params说明:
- name:属性类型名称,必填
- data_type:属性类型数据类型,包括:bool、byte、int、long、float、double、string、date、uuid、blob,默认string类型
- cardinality:属性类型基数,包括:single、list、set,默认single
请求体字段说明:
- id:属性类型id值
- properties:属性的属性,对于属性而言,此项为空
- user_data:设置属性类型的通用信息,比如可设置age属性的取值范围,最小为0,最大为100;目前此项不做任何校验,只为后期拓展提供预留入口
1.2.1 创建一个 PropertyKey
Method & Url
POST http://localhost:8080/graphs/hugegraph/schema/propertykeys
+
Request Body
{
"name": "age",
"data_type": "INT",
"cardinality": "SINGLE"
@@ -38,8 +38,8 @@
},
"task_id": 0
}
-
1.2.2 为已存在的 PropertyKey 添加或移除 userdata
Params
- action: 表示当前行为是添加还是移除,取值为
append
(添加)和eliminate
(移除)
Method & Url
PUT http://localhost:8080/graphs/hugegraph/schema/propertykeys/age?action=append
-
Request Body
{
+
1.2.2 为已存在的 PropertyKey 添加或移除 userdata
Params
- action: 表示当前行为是添加还是移除,取值为
append
(添加)和eliminate
(移除)
Method & Url
PUT http://localhost:8080/graphs/hugegraph/schema/propertykeys/age?action=append
+
Request Body
{
"name": "age",
"user_data": {
"min": 0,
@@ -65,8 +65,8 @@
},
"task_id": 0
}
-
1.2.3 获取所有的 PropertyKey
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/propertykeys
-
Response Status
200
+
1.2.3 获取所有的 PropertyKey
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/propertykeys
+
Response Status
200
Response Body
{
"propertykeys": [
{
@@ -127,8 +127,8 @@
}
]
}
-
1.2.4 根据name获取PropertyKey
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/propertykeys/age
-
其中,age
为要获取的PropertyKey的名字
Response Status
200
+
1.2.4 根据name获取PropertyKey
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/propertykeys/age
+
其中,age
为要获取的 PropertyKey 的名称
Response Status
200
Response Body
{
"id": 1,
"name": "age",
@@ -144,12 +144,12 @@
"~create_time": "2022-05-13 13:47:23.745"
}
}
-
1.2.5 根据name删除PropertyKey
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/schema/propertykeys/age
-
其中,age
为要获取的PropertyKey的名字
Response Status
202
+
1.2.5 根据 name 删除 PropertyKey
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/schema/propertykeys/age
+
其中,age
为要删除的 PropertyKey 的名称
Response Status
202
Response Body
{
"task_id" : 0
}
-
Last modified May 12, 2022: fix: bad request body simple in propertykey.md (1c933ca)
+
Last modified May 19, 2023: Update propertykey.md (#240) (b5fb8fb)
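Editor's note: the PropertyKey page above covers create, user_data append/eliminate, list, get by name, and delete (delete answers 202 with a task_id). A sketch of create plus the ?action=append update; the bodies reuse the page's own values, including the 0 to 100 age bounds from its user_data description.

# Sketch: create the age PropertyKey, append user_data bounds, then delete.
import requests

base = "http://localhost:8080/graphs/hugegraph/schema/propertykeys"

requests.post(base, json={
    "name": "age",
    "data_type": "INT",
    "cardinality": "SINGLE",
})

# action=append adds user_data entries; action=eliminate removes them
requests.put(f"{base}/age", params={"action": "append"}, json={
    "name": "age",
    "user_data": {"min": 0, "max": 100},  # the range example from the page
})

# delete is async: expect 202 and a body like {"task_id": 0}
r = requests.delete(f"{base}/age")
assert r.status_code == 202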
diff --git a/cn/docs/clients/restful-api/rank/index.html b/cn/docs/clients/restful-api/rank/index.html
index 07f0601c9..2e904b48f 100644
--- a/cn/docs/clients/restful-api/rank/index.html
+++ b/cn/docs/clients/restful-api/rank/index.html
@@ -98,8 +98,8 @@
}
]
}
-注意将映射文件中input.path
的值修改为自己本地的路径。
4.2.1.1 功能介绍
适用于二分图,给出所有源顶点相关的其他顶点及其相关性组成的列表。
二分图:也称二部图,是图论里的一种特殊模型,也是一种特殊的网络流。其最大的特点在于,可以将图里的顶点分为两个集合,两个集合之间的点有边相连,但集合内的点之间没有直接关联。
假设有一个用户和物品的二分图,基于随机游走的 PersonalRank 算法步骤如下:
- 选定一个起点用户 u,其初始权重为 1.0,从 Vu 开始游走(有 alpha 的概率走到邻居点,1 - alpha 的概率停留);
- 如果决定向外游走, 那么会选取某一个类型的出边, 例如
rating
来查找共同的打分人:- 那就从当前节点的邻居节点中按照均匀分布随机选择一个,并且按照均匀分布划分权重值;
- 给源顶点补偿权重 1 - alpha;
- 重复步骤2;
- 达到一定步数或达到精度后收敛,得到推荐列表。
Params
必填项:
- source: 源顶点 id
- label: 源点出发的某类边 label,须连接两类不同顶点
选填项:
- alpha:每轮迭代时从某个点往外走的概率,与 PageRank 算法中的 alpha 类似,取值区间为 (0, 1], 默认值
0.85
- max_degree: 查询过程中,单个顶点遍历的最大邻接边数目,默认为
10000
- max_depth: 迭代次数,取值区间为 [2, 50], 默认值
5
- with_label:筛选结果中保留哪些结果,可选以下三类, 默认为
BOTH_LABEL
- SAME_LABEL:仅保留与源顶点相同类别的顶点
- OTHER_LABEL:仅保留与源顶点不同类别(二分图的另一端)的顶点
- BOTH_LABEL:同时保留与源顶点相同和相反类别的顶点
- limit: 返回的顶点的最大数目,默认为
100
- max_diff: 提前收敛的精度差, 默认为
0.0001
(后续实现) - sorted:返回的结果是否根据 rank 排序,为 true 时降序排列,反之不排序,默认为
true
4.2.1.2 使用方法
Method & Url
POST http://localhost:8080/graphs/hugegraph/traversers/personalrank
-
Request Body
{
+
注意将映射文件中input.path
的值修改为自己本地的路径。
4.2.1.1 功能介绍
适用于二分图,给出所有源顶点相关的其他顶点及其相关性组成的列表。
二分图:也称二部图,是图论里的一种特殊模型,也是一种特殊的网络流。其最大的特点在于,可以将图里的顶点分为两个集合,两个集合之间的点有边相连,但集合内的点之间没有直接关联。
假设有一个用户和物品的二分图,基于随机游走的 PersonalRank 算法步骤如下:
- 选定一个起点用户 u,其初始权重为 1.0,从 Vu 开始游走(有 alpha 的概率走到邻居点,1 - alpha 的概率停留);
- 如果决定向外游走, 那么会选取某一个类型的出边, 例如
rating
来查找共同的打分人:- 那就从当前节点的邻居节点中按照均匀分布随机选择一个,并且按照均匀分布划分权重值;
- 给源顶点补偿权重 1 - alpha;
- 重复步骤2;
- 达到一定步数或达到精度后收敛,得到推荐列表。
Params
必填项:
- source: 源顶点 id
- label: 源点出发的某类边 label,须连接两类不同顶点
选填项:
- alpha:每轮迭代时从某个点往外走的概率,与 PageRank 算法中的 alpha 类似,取值区间为 (0, 1], 默认值
0.85
- max_degree: 查询过程中,单个顶点遍历的最大邻接边数目,默认为
10000
- max_depth: 迭代次数,取值区间为 [2, 50], 默认值
5
- with_label:筛选结果中保留哪些结果,可选以下三类, 默认为
BOTH_LABEL
- SAME_LABEL:仅保留与源顶点相同类别的顶点
- OTHER_LABEL:仅保留与源顶点不同类别(二分图的另一端)的顶点
- BOTH_LABEL:同时保留与源顶点相同和相反类别的顶点
- limit: 返回的顶点的最大数目,默认为
100
- max_diff: 提前收敛的精度差, 默认为
0.0001
(后续实现) - sorted:返回的结果是否根据 rank 排序,为 true 时降序排列,反之不排序,默认为
true
4.2.1.2 使用方法
Method & Url
POST http://localhost:8080/graphs/hugegraph/traversers/personalrank
+
Request Body
{
"source": "1:1",
"label": "rating",
"alpha": 0.6,
@@ -201,8 +201,8 @@
}
}
4.2.2.1 功能介绍
在一般图结构中,找出每一层与给定起点相关性最高的前 N 个顶点及其相关度,用图的语义理解就是:从起点往外走,
-走到各层各个顶点的概率。
Params
- source: 源顶点 id,必填项
- alpha:每轮迭代时从某个点往外走的概率,与 PageRank 算法中的 alpha 类似,必填项,取值区间为 (0, 1]
- steps: 表示从起始顶点走过的路径规则,是一组 Step 的列表,每个 Step 对应结果中的一层,必填项。每个 Step 的结构如下:
- direction:表示边的方向(OUT, IN, BOTH),默认是 BOTH
- labels:边的类型列表,多个边类型取并集
- max_degree:查询过程中,单个顶点遍历的最大邻接边数目,默认为 10000 (注: 0.12版之前 step 内仅支持 degree 作为参数名, 0.12开始统一使用 max_degree, 并向下兼容 degree 写法)
- top:在结果中每一层只保留权重最高的前 N 个结果,默认为 100,最大值为 1000
- capacity: 遍历过程中最大的访问的顶点数目,选填项,默认为10000000
4.2.2.2 使用方法
Method & Url
POST http://localhost:8080/graphs/hugegraph/traversers/neighborrank
-
Request Body
{
+走到各层各个顶点的概率。Params
- source: 源顶点 id,必填项
- alpha:每轮迭代时从某个点往外走的概率,与 PageRank 算法中的 alpha 类似,必填项,取值区间为 (0, 1]
- steps: 表示从起始顶点走过的路径规则,是一组 Step 的列表,每个 Step 对应结果中的一层,必填项。每个 Step 的结构如下:
- direction:表示边的方向(OUT, IN, BOTH),默认是 BOTH
- labels:边的类型列表,多个边类型取并集
- max_degree:查询过程中,单个顶点遍历的最大邻接边数目,默认为 10000 (注: 0.12版之前 step 内仅支持 degree 作为参数名, 0.12开始统一使用 max_degree, 并向下兼容 degree 写法)
- top:在结果中每一层只保留权重最高的前 N 个结果,默认为 100,最大值为 1000
- capacity: 遍历过程中最大的访问的顶点数目,选填项,默认为10000000
4.2.2.2 使用方法
Method & Url
POST http://localhost:8080/graphs/hugegraph/traversers/neighborrank
+
Request Body
{
"source":"O",
"steps":[
{
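Editor's note: the PersonalRank section above explains the random-walk recommendation procedure and its parameters. A sketch of the 4.2.1.2 POST; source, label, and alpha are the page's own body values, and the two extra parameters shown are the documented defaults.

# Sketch: PersonalRank from user vertex 1:1 over rating edges.
import requests

resp = requests.post(
    "http://localhost:8080/graphs/hugegraph/traversers/personalrank",
    json={
        "source": "1:1",
        "label": "rating",
        "alpha": 0.6,
        "max_depth": 5,              # documented default
        "with_label": "BOTH_LABEL",  # documented default
    },
)
print(resp.json())  # other vertices ranked by correlation with the source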
diff --git a/cn/docs/clients/restful-api/rebuild/index.html b/cn/docs/clients/restful-api/rebuild/index.html
index 01e9f438c..8a8dc51fc 100644
--- a/cn/docs/clients/restful-api/rebuild/index.html
+++ b/cn/docs/clients/restful-api/rebuild/index.html
@@ -24,18 +24,18 @@
Create child page
Create documentation issue
Create project issue
- Print entire section
Rebuild API
1.6 Rebuild
1.6.1 重建IndexLabel
Method & Url
PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/indexlabels/personByCity
-
Response Status
202
+ Print entire section
Rebuild API
1.6 Rebuild
1.6.1 重建IndexLabel
Method & Url
PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/indexlabels/personByCity
+
Response Status
202
Response Body
{
"task_id": 1
}
-
注:
可以通过GET http://localhost:8080/graphs/hugegraph/tasks/1
(其中"1"是task_id)来查询异步任务的执行状态,更多异步任务RESTful API
1.6.2 VertexLabel对应的全部索引重建
Method & Url
PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/vertexlabels/person
-
Response Status
202
+
注:
可以通过GET http://localhost:8080/graphs/hugegraph/tasks/1
(其中"1"是task_id)来查询异步任务的执行状态,更多异步任务RESTful API
1.6.2 VertexLabel对应的全部索引重建
Method & Url
PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/vertexlabels/person
+
Response Status
202
Response Body
{
"task_id": 2
}
-
注:
可以通过GET http://localhost:8080/graphs/hugegraph/tasks/2
(其中"2"是task_id)来查询异步任务的执行状态,更多异步任务RESTful API
1.6.3 EdgeLabel对应的全部索引重建
Method & Url
PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/edgelabels/created
-
Response Status
202
+
注:
可以通过GET http://localhost:8080/graphs/hugegraph/tasks/2
(其中"2"是task_id)来查询异步任务的执行状态,更多异步任务RESTful API
1.6.3 EdgeLabel对应的全部索引重建
Method & Url
PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/edgelabels/created
+
Response Status
202
Response Body
{
"task_id": 3
}
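Editor's note: each rebuild call above answers 202 with a task_id, and the page's notes say the task can be checked via GET /graphs/hugegraph/tasks/{id}. A sketch that rebuilds the personByCity index and polls until the task finishes; the task_status field name and its success value are assumptions (the Task API below filters with ?status=success).

# Sketch: trigger an async index rebuild, then poll the task API.
import time
import requests

graph = "http://localhost:8080/graphs/hugegraph"

r = requests.put(f"{graph}/jobs/rebuild/indexlabels/personByCity")
assert r.status_code == 202
task_id = r.json()["task_id"]

for _ in range(60):  # poll for up to a minute
    task = requests.get(f"{graph}/tasks/{task_id}").json()
    if task.get("task_status") == "success":  # assumption: field/value names
        break
    time.sleep(1)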
diff --git a/cn/docs/clients/restful-api/schema/index.html b/cn/docs/clients/restful-api/schema/index.html
index 74fa9e1c0..62e1bfb8a 100644
--- a/cn/docs/clients/restful-api/schema/index.html
+++ b/cn/docs/clients/restful-api/schema/index.html
@@ -2,30 +2,31 @@
HugeGraph 提供单一接口获取某个图的全部 Schema 信息,包括:PropertyKey、VertexLabel、EdgeLabel 和 IndexLabel。
Method & Url
GET …">
+Method & Url GET http://localhost:8080/graphs/{graph_name}/schema e.g: GET http://localhost:8080/graphs/hugegraph/schema Response Status 200 Response Body { "propertykeys": [ { "id": 7, "name": "price", "data_type": "DOUBLE", "cardinality": "SINGLE", "aggregate_type": "NONE", "write_type": "OLTP", "properties": [], "status": "CREATED", "user_data": { "~create_time": "2023-05-08 17:49:05.316" } }, { "id": 6, "name": "date", "data_type": "TEXT", "cardinality": "SINGLE", "aggregate_type": "NONE", "write_type": "OLTP", "properties": [], "status": "CREATED", "user_data": { "~create_time": "2023-05-08 17:49:05.309" } }, { "id": 3, "name": "city", "data_type": "TEXT", "cardinality": "SINGLE", "aggregate_type": "NONE", "write_type": "OLTP", "properties": [], "status": "CREATED", "user_data": { "~create_time": "2023-05-08 17:49:05.">
Schema API
1.1 Schema
HugeGraph 提供单一接口获取某个图的全部 Schema 信息,包括:PropertyKey、VertexLabel、EdgeLabel 和 IndexLabel。
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema
-
Response Status
200
+ Print entire section
Schema API
1.1 Schema
HugeGraph 提供单一接口获取某个图的全部 Schema 信息,包括:PropertyKey、VertexLabel、EdgeLabel 和 IndexLabel。
Method & Url
GET http://localhost:8080/graphs/{graph_name}/schema
+
+e.g: GET http://localhost:8080/graphs/hugegraph/schema
+
Response Status
200
Response Body
{
"propertykeys": [
{
"id": 7,
"name": "price",
- "data_type": "INT",
+ "data_type": "DOUBLE",
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.741"
+ "~create_time": "2023-05-08 17:49:05.316"
}
},
{
@@ -35,11 +36,10 @@
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.729"
+ "~create_time": "2023-05-08 17:49:05.309"
}
},
{
@@ -49,11 +49,10 @@
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.691"
+ "~create_time": "2023-05-08 17:49:05.287"
}
},
{
@@ -63,11 +62,10 @@
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.678"
+ "~create_time": "2023-05-08 17:49:05.280"
}
},
{
@@ -77,11 +75,10 @@
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.718"
+ "~create_time": "2023-05-08 17:49:05.301"
}
},
{
@@ -91,11 +88,10 @@
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.707"
+ "~create_time": "2023-05-08 17:49:05.294"
}
},
{
@@ -105,11 +101,10 @@
"cardinality": "SINGLE",
"aggregate_type": "NONE",
"write_type": "OLTP",
- "properties": [
- ],
+ "properties": [],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.609"
+ "~create_time": "2023-05-08 17:49:05.250"
}
}
],
@@ -122,9 +117,11 @@
"name"
],
"nullable_keys": [
- "age"
+ "age",
+ "city"
],
"index_labels": [
+ "personByAge",
"personByCity",
"personByAgeAndCity"
],
@@ -137,19 +134,15 @@
"ttl": 0,
"enable_label_index": true,
"user_data": {
- "~create_time": "2021-09-03 15:13:40.783"
+ "~create_time": "2023-05-08 17:49:05.336"
}
},
{
"id": 2,
"name": "software",
- "id_strategy": "PRIMARY_KEY",
- "primary_keys": [
- "name"
- ],
- "nullable_keys": [
- "price"
- ],
+ "id_strategy": "CUSTOMIZE_NUMBER",
+ "primary_keys": [],
+ "nullable_keys": [],
"index_labels": [
"softwareByPrice"
],
@@ -162,7 +155,7 @@
"ttl": 0,
"enable_label_index": true,
"user_data": {
- "~create_time": "2021-09-03 15:13:40.840"
+ "~create_time": "2023-05-08 17:49:05.347"
}
}
],
@@ -172,13 +165,9 @@
"name": "knows",
"source_label": "person",
"target_label": "person",
- "frequency": "MULTIPLE",
- "sort_keys": [
- "date"
- ],
- "nullable_keys": [
- "weight"
- ],
+ "frequency": "SINGLE",
+ "sort_keys": [],
+ "nullable_keys": [],
"index_labels": [
"knowsByWeight"
],
@@ -190,7 +179,7 @@
"ttl": 0,
"enable_label_index": true,
"user_data": {
- "~create_time": "2021-09-03 15:13:41.840"
+ "~create_time": "2023-05-08 17:49:08.437"
}
},
{
@@ -199,11 +188,8 @@
"source_label": "person",
"target_label": "software",
"frequency": "SINGLE",
- "sort_keys": [
- ],
- "nullable_keys": [
- "weight"
- ],
+ "sort_keys": [],
+ "nullable_keys": [],
"index_labels": [
"createdByDate",
"createdByWeight"
@@ -216,13 +202,27 @@
"ttl": 0,
"enable_label_index": true,
"user_data": {
- "~create_time": "2021-09-03 15:13:41.868"
+ "~create_time": "2023-05-08 17:49:08.446"
}
}
],
"indexlabels": [
{
"id": 1,
+ "name": "personByAge",
+ "base_type": "VERTEX_LABEL",
+ "base_value": "person",
+ "index_type": "RANGE_INT",
+ "fields": [
+ "age"
+ ],
+ "status": "CREATED",
+ "user_data": {
+ "~create_time": "2023-05-08 17:49:05.375"
+ }
+ },
+ {
+ "id": 2,
"name": "personByCity",
"base_type": "VERTEX_LABEL",
"base_value": "person",
@@ -232,68 +232,68 @@
],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:40.886"
+ "~create_time": "2023-05-08 17:49:06.898"
}
},
{
- "id": 4,
- "name": "createdByDate",
- "base_type": "EDGE_LABEL",
- "base_value": "created",
+ "id": 3,
+ "name": "personByAgeAndCity",
+ "base_type": "VERTEX_LABEL",
+ "base_value": "person",
"index_type": "SECONDARY",
"fields": [
- "date"
+ "age",
+ "city"
],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:41.878"
+ "~create_time": "2023-05-08 17:49:07.407"
}
},
{
- "id": 5,
- "name": "createdByWeight",
- "base_type": "EDGE_LABEL",
- "base_value": "created",
+ "id": 4,
+ "name": "softwareByPrice",
+ "base_type": "VERTEX_LABEL",
+ "base_value": "software",
"index_type": "RANGE_DOUBLE",
"fields": [
- "weight"
+ "price"
],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:42.117"
+ "~create_time": "2023-05-08 17:49:07.916"
}
},
{
- "id": 2,
- "name": "personByAgeAndCity",
- "base_type": "VERTEX_LABEL",
- "base_value": "person",
+ "id": 5,
+ "name": "createdByDate",
+ "base_type": "EDGE_LABEL",
+ "base_value": "created",
"index_type": "SECONDARY",
"fields": [
- "age",
- "city"
+ "date"
],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:41.351"
+ "~create_time": "2023-05-08 17:49:08.454"
}
},
{
- "id": 3,
- "name": "softwareByPrice",
- "base_type": "VERTEX_LABEL",
- "base_value": "software",
- "index_type": "RANGE_INT",
+ "id": 6,
+ "name": "createdByWeight",
+ "base_type": "EDGE_LABEL",
+ "base_value": "created",
+ "index_type": "RANGE_DOUBLE",
"fields": [
- "price"
+ "weight"
],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:41.587"
+ "~create_time": "2023-05-08 17:49:08.963"
}
},
{
- "id": 6,
+ "id": 7,
"name": "knowsByWeight",
"base_type": "EDGE_LABEL",
"base_value": "knows",
@@ -303,12 +303,12 @@
],
"status": "CREATED",
"user_data": {
- "~create_time": "2021-09-03 15:13:42.376"
+ "~create_time": "2023-05-08 17:49:09.473"
}
}
]
}
-
Last modified April 17, 2022: rebuild doc (ef36544)
+
Last modified May 14, 2023: docs: modify and translate schema-api (#214) (9c794f6)
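Editor's note: the updated Schema page documents a single GET /graphs/{graph_name}/schema that returns every PropertyKey, VertexLabel, EdgeLabel, and IndexLabel at once. A sketch that fetches the schema of the hugegraph example graph and counts each family:

# Sketch: fetch the whole schema in one call, expecting 200.
import requests

resp = requests.get("http://localhost:8080/graphs/hugegraph/schema")
assert resp.status_code == 200
schema = resp.json()
for family in ("propertykeys", "vertexlabels", "edgelabels", "indexlabels"):
    print(family, len(schema.get(family, [])))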
diff --git a/cn/docs/clients/restful-api/task/index.html b/cn/docs/clients/restful-api/task/index.html
index c51a691ca..d2d3965f0 100644
--- a/cn/docs/clients/restful-api/task/index.html
+++ b/cn/docs/clients/restful-api/task/index.html
@@ -12,8 +12,8 @@
Create child page
Create documentation issue
Create project issue
- Print entire sectionTask API
7.1 Task
7.1.1 列出某个图中全部的异步任务
Params
- status: 异步任务的状态
- limit:返回异步任务数目上限
Method & Url
GET http://localhost:8080/graphs/hugegraph/tasks?status=success
-
Response Status
200
+ Print entire section
Task API
7.1 Task
7.1.1 列出某个图中全部的异步任务
Params
- status: 异步任务的状态
- limit:返回异步任务数目上限
Method & Url
GET http://localhost:8080/graphs/hugegraph/tasks?status=success
+
Response Status
200
Response Body
{
"tasks": [{
"task_name": "hugegraph.traversal().V()",
@@ -29,8 +29,8 @@
"task_input": "{\"gremlin\":\"hugegraph.traversal().V()\",\"bindings\":{},\"language\":\"gremlin-groovy\",\"aliases\":{\"hugegraph\":\"graph\"}}"
}]
}
-
7.1.2 查看某个异步任务的信息
Method & Url
GET http://localhost:8080/graphs/hugegraph/tasks/2
-
Response Status
200
+
7.1.2 查看某个异步任务的信息
Method & Url
GET http://localhost:8080/graphs/hugegraph/tasks/2
+
Response Status
200
Response Body
{
"task_name": "hugegraph.traversal().V()",
"task_progress": 0,
@@ -44,8 +44,8 @@
"task_callable": "com.baidu.hugegraph.api.job.GremlinAPI$GremlinJob",
"task_input": "{\"gremlin\":\"hugegraph.traversal().V()\",\"bindings\":{},\"language\":\"gremlin-groovy\",\"aliases\":{\"hugegraph\":\"graph\"}}"
}
-
7.1.3 删除某个异步任务信息,不删除异步任务本身
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/tasks/2
-
Response Status
204
+
7.1.3 删除某个异步任务信息,不删除异步任务本身
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/tasks/2
+
Response Status
204
7.1.4 取消某个异步任务,该异步任务必须具有处理中断的能力
假设已经通过Gremlin API创建了一个异步任务如下:
"for (int i = 0; i < 10; i++) {" +
"hugegraph.addVertex(T.label, 'man');" +
"hugegraph.tx().commit();" +
@@ -55,8 +55,8 @@
"break;" +
"}" +
"}"
-
Method & Url
PUT http://localhost:8080/graphs/hugegraph/tasks/2?action=cancel
-
请保证在10秒内发送该请求,如果超过10秒发送,任务可能已经执行完成,无法取消。
Response Status
202
+
Method & Url
PUT http://localhost:8080/graphs/hugegraph/tasks/2?action=cancel
+
请保证在10秒内发送该请求,如果超过10秒发送,任务可能已经执行完成,无法取消。
Response Status
202
Response Body
{
"cancelled": true
}
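Editor's note: cancellation above only succeeds while the task is still running and only for tasks that can handle interruption; for the looping Gremlin job the page says the PUT must arrive within 10 seconds. A sketch:

# Sketch: cancel async task 2; expects 202 and {"cancelled": true} on success.
import requests

resp = requests.put(
    "http://localhost:8080/graphs/hugegraph/tasks/2",
    params={"action": "cancel"},
)
assert resp.status_code == 202
print(resp.json())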
diff --git a/cn/docs/clients/restful-api/traverser/index.html b/cn/docs/clients/restful-api/traverser/index.html
index fa9260112..6a3a90afe 100644
--- a/cn/docs/clients/restful-api/traverser/index.html
+++ b/cn/docs/clients/restful-api/traverser/index.html
@@ -118,28 +118,28 @@
peter.addEdge("created", lop, "date", "20170324", "weight", 0.2);
}
}
-
顶点ID为:
"2:ripple",
-"1:vadas",
-"1:peter",
-"1:josh",
-"1:marko",
-"2:lop"
-
边ID为:
"S1:peter>2>>S2:lop",
-"S1:josh>2>>S2:lop",
-"S1:josh>2>>S2:ripple",
-"S1:marko>1>20130220>S1:josh",
-"S1:marko>1>20160110>S1:vadas",
-"S1:marko>2>>S2:lop"
-
3.2.1 K-out API(GET,基础版)
3.2.1.1 功能介绍
根据起始顶点、方向、边的类型(可选)和深度depth,查找从起始顶点出发恰好depth步可达的顶点
Params
- source:起始顶点id,必填项
- direction:起始顶点向外发散的方向(OUT,IN,BOTH),选填项,默认是BOTH
- max_depth:步数,必填项
- label:边的类型,选填项,默认代表所有edge label
- nearest:nearest为true时,代表起始顶点到达结果顶点的最短路径长度为depth,不存在更短的路径;nearest为false时,代表起始顶点到结果顶点有一条长度为depth的路径(未必最短且可以有环),选填项,默认为true
- max_degree:查询过程中,单个顶点遍历的最大邻接边数目,选填项,默认为10000
- capacity:遍历过程中最大的访问的顶点数目,选填项,默认为10000000
- limit:返回的顶点的最大数目,选填项,默认为10000000
3.2.1.2 使用方法
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/kout?source="1:marko"&max_depth=2
-
Response Status
200
+
顶点ID为:
"2:ripple",
+"1:vadas",
+"1:peter",
+"1:josh",
+"1:marko",
+"2:lop"
+
边ID为:
"S1:peter>2>>S2:lop",
+"S1:josh>2>>S2:lop",
+"S1:josh>2>>S2:ripple",
+"S1:marko>1>20130220>S1:josh",
+"S1:marko>1>20160110>S1:vadas",
+"S1:marko>2>>S2:lop"
+
3.2.1 K-out API(GET,基础版)
3.2.1.1 功能介绍
根据起始顶点、方向、边的类型(可选)和深度depth,查找从起始顶点出发恰好depth步可达的顶点
Params
- source:起始顶点id,必填项
- direction:起始顶点向外发散的方向(OUT,IN,BOTH),选填项,默认是BOTH
- max_depth:步数,必填项
- label:边的类型,选填项,默认代表所有edge label
- nearest:nearest为true时,代表起始顶点到达结果顶点的最短路径长度为depth,不存在更短的路径;nearest为false时,代表起始顶点到结果顶点有一条长度为depth的路径(未必最短且可以有环),选填项,默认为true
- max_degree:查询过程中,单个顶点遍历的最大邻接边数目,选填项,默认为10000
- capacity:遍历过程中最大的访问的顶点数目,选填项,默认为10000000
- limit:返回的顶点的最大数目,选填项,默认为10000000
3.2.1.2 使用方法
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/kout?source="1:marko"&max_depth=2
+
Response Status
200
Response Body
{
"vertices":[
"2:ripple",
"1:peter"
]
}
-
3.2.1.3 适用场景
查找恰好N步关系可达的顶点。两个例子:
- 家族关系中,查找一个人的所有孙子,person A通过连续的两条“儿子”边到达的顶点集合。
- 社交关系中发现潜在好友,例如:与目标用户相隔两层朋友关系的用户,可以通过连续两条“朋友”边到达的顶点。
3.2.2 K-out API(POST,高级版)
3.2.2.1 功能介绍
根据起始顶点、步骤(包括方向、边类型和过滤属性)和深度depth,查找从起始顶点出发恰好depth步可达的顶点。
与K-out基础版的不同在于:
- 支持只统计邻居数量
- 支持边属性过滤
- 支持返回到达邻居的最短路径
Params
- source:起始顶点id,必填项
- 从起始点出发的Step,必填项,结构如下:
- direction:表示边的方向(OUT,IN,BOTH),默认是BOTH
- labels:边的类型列表
- properties:通过属性的值过滤边
- max_degree:查询过程中,单个顶点遍历的最大邻接边数目,默认为 10000 (注: 0.12版之前 step 内仅支持 degree 作为参数名, 0.12开始统一使用 max_degree, 并向下兼容 degree 写法)
- skip_degree:用于设置查询过程中舍弃超级顶点的最小边数,即当某个顶点的邻接边数目大于 skip_degree 时,完全舍弃该顶点。选填项,如果开启时,需满足
skip_degree >= max_degree
约束,默认为0 (不启用),表示不跳过任何点 (注意: 开启此配置后,遍历时会尝试访问一个顶点的 skip_degree 条边,而不仅仅是 max_degree 条边,这样有额外的遍历开销,对查询性能影响可能有较大影响,请确认理解后再开启)
- max_depth:步数,必填项
- nearest:nearest为true时,代表起始顶点到达结果顶点的最短路径长度为depth,不存在更短的路径;nearest为false时,代表起始顶点到结果顶点有一条长度为depth的路径(未必最短且可以有环),选填项,默认为true
- count_only:Boolean值,true表示只统计结果的数目,不返回具体结果;false表示返回具体的结果,默认为false
- with_path:true表示返回起始点到每个邻居的最短路径,false表示不返回起始点到每个邻居的最短路径,选填项,默认为false
- with_vertex,选填项,默认为false:
- true表示返回结果包含完整的顶点信息(路径中的全部顶点)
- with_path为true时,返回所有路径中的顶点的完整信息
- with_path为false时,返回所有邻居的完整信息
- false时表示只返回顶点id
- capacity:遍历过程中最大的访问的顶点数目,选填项,默认为10000000
- limit:返回的顶点的最大数目,选填项,默认为10000000
3.2.2.2 使用方法
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/kout
-
Request Body
{
"source": "1:marko",
"step": {
"direction": "BOTH",
@@ -227,8 +227,8 @@
}
]
}
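The request body above is elided in this diff; for orientation, a hypothetical complete payload assembled from the Params list (the edge labels and values are illustrative only, not the original example's):

import requests

body = {
    "source": "1:marko",
    "step": {
        "direction": "BOTH",
        "labels": ["knows", "created"],  # assumed labels from the example graph
        "max_degree": 10000,
        "skip_degree": 100000,
    },
    "max_depth": 1,
    "nearest": False,
    "count_only": False,
    "with_path": True,
    "with_vertex": True,
    "limit": 10000,
}
resp = requests.post("http://localhost:8080/graphs/hugegraph/traversers/kout", json=body)
print(resp.json())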
3.2.2.3 Applicable Scenarios
See 3.2.1.3
3.2.3 K-neighbor (GET, basic)
3.2.3.1 Function Introduction
Given a starting vertex, a direction, an optional edge label and a depth, find all vertices reachable within depth steps, including the starting vertex itself
This is equivalent to the union of: the starting vertex, K-out(1), K-out(2), ..., K-out(max_depth); a small sketch of this relation follows
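Illustratively, the union relation can be sketched client-side (reusing the kout helper sketched in 3.2.1.2; the server of course computes this in a single call):

def kneighbor(graph, source_id, max_depth):
    result = {source_id}  # the starting vertex itself is included
    for depth in range(1, max_depth + 1):
        result |= set(kout(graph, source_id, depth))
    return result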
Params
- source: id of the starting vertex, required
- direction: the direction in which edges radiate from the starting vertex (OUT, IN, BOTH); optional, defaults to BOTH
- max_depth: number of steps, required
- label: edge label; optional, defaults to all edge labels
- max_degree: the maximum number of adjacent edges a single vertex may traverse during the query; optional, defaults to 10000
- limit: the maximum number of vertices returned, which is also the maximum number of vertices visited during the traversal; optional, defaults to 10000000
3.2.3.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/kneighbor?source="1:marko"&max_depth=2
Response Status
200
Response Body
{
"vertices":[
"2:ripple",
@@ -239,8 +239,8 @@
"2:lop"
]
}
3.2.3.3 Applicable Scenarios
Find all vertices reachable within N steps, for example:
- In family relationships, find all descendants of a person within five generations: the set of vertices person A reaches via up to 5 consecutive "parent-child" edges.
- Discover friend circles in a social network, e.g. the users reachable from the target user via 1, 2 or 3 "friend" edges form the target user's friend circle
3.2.4 K-neighbor API (POST, advanced)
3.2.4.1 Function Introduction
Given a starting vertex, a step (including direction, edge labels and property filters) and a depth, find all vertices reachable from the starting vertex within depth steps.
Differences from the basic K-neighbor API:
- supports counting the number of neighbors only
- supports filtering edges by property
- supports returning the shortest path to each neighbor
Params
- source: id of the starting vertex, required
- step: the Step taken from the starting vertex, required, with the following structure:
  - direction: the direction of edges (OUT, IN, BOTH), defaults to BOTH
  - labels: a list of edge labels
  - properties: filter edges by property values
  - max_degree: the maximum number of adjacent edges a single vertex may traverse during the query, defaults to 10000 (note: before version 0.12 only degree was supported as the parameter name inside step; since 0.12 the unified name is max_degree, and degree is still accepted for backward compatibility)
  - skip_degree: sets the minimum edge count at which super vertices are discarded during the query, i.e. a vertex is dropped entirely once its number of adjacent edges exceeds skip_degree. Optional; when enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal attempts to visit up to skip_degree edges of a vertex rather than just max_degree edges, adding traversal overhead that may noticeably affect query performance; make sure you understand it before enabling)
- max_depth: number of steps, required
- count_only: Boolean; true returns only the count of results without the results themselves, false returns the actual results; defaults to false
- with_path: true returns the shortest path from the starting vertex to each neighbor, false does not; optional, defaults to false
- with_vertex: optional, defaults to false:
  - true: results include full vertex information (all vertices on the paths)
    - when with_path is true, full information of all vertices on the paths is returned
    - when with_path is false, full information of all neighbors is returned
  - false: only vertex ids are returned
- limit: the maximum number of vertices returned; optional, defaults to 10000000
3.2.4.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/kneighbor
Request Body
{
"source": "1:marko",
"step": {
"direction": "BOTH",
@@ -369,20 +369,20 @@
}
]
}
3.2.4.3 Applicable Scenarios
See 3.2.3.3
3.2.5 Same Neighbors
3.2.5.1 Function Introduction
Query the common neighbors of two vertices
Params
- vertex: id of one vertex, required
- other: id of the other vertex, required
- direction: the direction in which edges radiate from the vertices (OUT, IN, BOTH); optional, defaults to BOTH
- label: edge label; optional, defaults to all edge labels
- max_degree: the maximum number of adjacent edges a single vertex may traverse during the query; optional, defaults to 10000
- limit: the maximum number of common neighbors returned; optional, defaults to 10000000
3.2.5.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/sameneighbors?vertex="1:marko"&other="1:josh"
Response Status
200
Response Body
{
"same_neighbors":[
"2:lop"
]
}
3.2.5.3 Applicable Scenarios
Find the common neighbors of two vertices:
- In social relationships, discover the common followers, or the commonly followed users, of two users
3.2.6 Jaccard Similarity (GET)
3.2.6.1 Function Introduction
Compute the jaccard similarity of two vertices (the intersection of the two vertices' neighbor sets divided by the union of their neighbor sets)
Params
- vertex: id of one vertex, required
- other: id of the other vertex, required
- direction: the direction in which edges radiate from the vertices (OUT, IN, BOTH); optional, defaults to BOTH
- label: edge label; optional, defaults to all edge labels
- max_degree: the maximum number of adjacent edges a single vertex may traverse during the query; optional, defaults to 10000
3.2.6.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/jaccardsimilarity?vertex="1:marko"&other="1:josh"
Response Status
200
Response Body
{
"jaccard_similarity": 0.2
}
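The definition amounts to the following computation (a sketch; the neighbor sets below are invented so as to reproduce the 0.2 above):

def jaccard_similarity(neighbors_a, neighbors_b):
    a, b = set(neighbors_a), set(neighbors_b)
    return len(a & b) / len(a | b)  # |intersection| / |union|

# 1 shared neighbor out of 5 distinct neighbors overall -> 0.2
print(jaccard_similarity({"2:lop", "1:vadas", "1:josh"},
                         {"2:lop", "2:ripple", "1:peter"}))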
3.2.6.3 Applicable Scenarios
Evaluate the similarity or closeness of two vertices
3.2.7 Jaccard Similarity (POST)
3.2.7.1 Function Introduction
Compute the N vertices with the highest jaccard similarity to a given vertex
The jaccard similarity is computed as: the intersection of the two vertices' neighbor sets divided by the union of their neighbor sets
Params
- vertex: id of a vertex, required
- step: the Step taken from the starting vertex, required, with the following structure:
  - direction: the direction of edges (OUT, IN, BOTH), defaults to BOTH
  - labels: a list of edge labels
  - properties: filter edges by property values
  - max_degree: the maximum number of adjacent edges a single vertex may traverse during the query, defaults to 10000 (note: before version 0.12 only degree was supported as the parameter name inside step; since 0.12 the unified name is max_degree, and degree is still accepted for backward compatibility)
  - skip_degree: sets the minimum edge count at which super vertices are discarded during the query, i.e. a vertex is dropped entirely once its number of adjacent edges exceeds skip_degree. Optional; when enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal attempts to visit up to skip_degree edges of a vertex rather than just max_degree edges, adding traversal overhead that may noticeably affect query performance; make sure you understand it before enabling)
- top: return the top highest-similarity vertices for the starting vertex; optional, defaults to 100
- capacity: the maximum number of vertices visited during the traversal; optional, defaults to 10000000
3.2.7.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/jaccardsimilarity
Request Body
{
"vertex": "1:marko",
"step": {
"direction": "BOTH",
@@ -398,8 +398,8 @@
"1:peter": 0.3333333333333333,
"1:josh": 0.2
}
3.2.7.3 Applicable Scenarios
Find the vertices in the graph with the highest similarity to a given vertex
3.2.8 Shortest Path
3.2.8.1 Function Introduction
Given a starting vertex, a target vertex, a direction, an optional edge label and a maximum depth, find one shortest path
Params
- source: id of the starting vertex, required
- target: id of the target vertex, required
- direction: the direction in which edges radiate from the starting vertex (OUT, IN, BOTH); optional, defaults to BOTH
- max_depth: maximum number of steps, required
- label: edge label; optional, defaults to all edge labels
- max_degree: the maximum number of adjacent edges a single vertex may traverse during the query; optional, defaults to 10000
- skip_degree: sets the minimum edge count at which super vertices are discarded during the query, i.e. a vertex is dropped entirely once its number of adjacent edges exceeds skip_degree. Optional; when enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal attempts to visit up to skip_degree edges of a vertex rather than just max_degree edges, adding traversal overhead that may noticeably affect query performance; make sure you understand it before enabling)
- capacity: the maximum number of vertices visited during the traversal; optional, defaults to 10000000
3.2.8.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/shortestpath?source="1:marko"&target="2:ripple"&max_depth=3
Response Status
200
Response Body
{
"path":[
"1:marko",
@@ -407,8 +407,8 @@
"2:ripple"
]
}
3.2.8.3 Applicable Scenarios
Find the shortest path between two vertices, for example:
- In a social network, find the shortest chain of relationships between two users, i.e. the closest friend chain
- In a device-association network, find the shortest association between two devices
3.2.9 All Shortest Paths
3.2.9.1 Function Introduction
Given a starting vertex, a target vertex, a direction, an optional edge label and a maximum depth, find all shortest paths between the two vertices
Params
- source: id of the starting vertex, required
- target: id of the target vertex, required
- direction: the direction in which edges radiate from the starting vertex (OUT, IN, BOTH); optional, defaults to BOTH
- max_depth: maximum number of steps, required
- label: edge label; optional, defaults to all edge labels
- max_degree: the maximum number of adjacent edges a single vertex may traverse during the query; optional, defaults to 10000
- skip_degree: sets the minimum edge count at which super vertices are discarded during the query, i.e. a vertex is dropped entirely once its number of adjacent edges exceeds skip_degree. Optional; when enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal attempts to visit up to skip_degree edges of a vertex rather than just max_degree edges, adding traversal overhead that may noticeably affect query performance; make sure you understand it before enabling)
- capacity: the maximum number of vertices visited during the traversal; optional, defaults to 10000000
3.2.9.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/allshortestpaths?source="A"&target="Z"&max_depth=10
Response Status
200
Response Body
{
"paths":[
{
@@ -429,8 +429,8 @@
}
]
}
3.2.9.3 Applicable Scenarios
Find all shortest paths between two vertices, for example:
- In a social network, find all of the shortest relationship chains between two users
- In a device-association network, find all of the shortest associations between two devices
3.2.10 Weighted Shortest Path
3.2.10.1 Function Introduction
Given a starting vertex, a target vertex, a direction, an optional edge label and a maximum depth, find one weighted shortest path
Params
- source: id of the starting vertex, required
- target: id of the target vertex, required
- direction: the direction in which edges radiate from the starting vertex (OUT, IN, BOTH); optional, defaults to BOTH
- label: edge label; optional, defaults to all edge labels
- weight: the weight property of edges, required; must be a numeric property
- max_degree: the maximum number of adjacent edges a single vertex may traverse during the query; optional, defaults to 10000
- skip_degree: sets the minimum edge count at which super vertices are discarded during the query, i.e. a vertex is dropped entirely once its number of adjacent edges exceeds skip_degree. Optional; when enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal attempts to visit up to skip_degree edges of a vertex rather than just max_degree edges, adding traversal overhead that may noticeably affect query performance; make sure you understand it before enabling)
- capacity: the maximum number of vertices visited during the traversal; optional, defaults to 10000000
- with_vertex: true means results include full vertex information (all vertices on the path), false means only vertex ids are returned; optional, defaults to false
3.2.10.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/weightedshortestpath?source="1:marko"&target="2:ripple"&weight="weight"&with_vertex=true
Response Status
200
Response Body
{
"path": {
"weight": 2.0,
@@ -473,8 +473,8 @@
}
]
}
3.2.10.3 Applicable Scenarios
Find the weighted shortest path between two vertices, for example:
- In a transportation network, find the cheapest route from city A to city B
3.2.11 Single Source Shortest Path
3.2.11.1 Function Introduction
Starting from one vertex, find the shortest paths from that vertex to the other vertices in the graph (optionally weighted)
Params
- source: id of the starting vertex, required
- direction: the direction in which edges radiate from the starting vertex (OUT, IN, BOTH); optional, defaults to BOTH
- label: edge label; optional, defaults to all edge labels
- weight: the weight property of edges; optional, must be a numeric property; if omitted, or if an edge lacks the property, the weight is 1.0
- max_degree: the maximum number of adjacent edges a single vertex may traverse during the query; optional, defaults to 10000
- skip_degree: sets the minimum edge count at which super vertices are discarded during the query, i.e. a vertex is dropped entirely once its number of adjacent edges exceeds skip_degree. Optional; when enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal attempts to visit up to skip_degree edges of a vertex rather than just max_degree edges, adding traversal overhead that may noticeably affect query performance; make sure you understand it before enabling)
- capacity: the maximum number of vertices visited during the traversal; optional, defaults to 10000000
- limit: the number of target vertices found, which is also the number of shortest paths returned; optional, defaults to 10
- with_vertex: true means results include full vertex information (all vertices on the paths), false means only vertex ids are returned; optional, defaults to false
3.2.11.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/singlesourceshortestpath?source="1:marko"&with_vertex=true
Response Status
200
Response Body
{
"paths": {
"2:ripple": {
@@ -578,8 +578,8 @@
}
]
}
3.2.11.3 Applicable Scenarios
Find weighted shortest paths from one vertex to other vertices, e.g.:
- Find the fastest travel plans from Beijing to every other city in the country
3.2.12 Multi Node Shortest Path
3.2.12.1 Function Introduction
Find the pairwise shortest paths among a given set of vertices
Params
- vertices: defines the starting vertices, required; specified by:
  - ids: a list of vertex ids
  - label and properties: if ids is not specified, the starting vertices are queried by the combined condition of label and properties
    - label: vertex type
    - properties: query starting vertices by property values
  Note: property values in properties may be lists, in which case any value in the list matches the key
- step: the path taken from a starting vertex to a terminal vertex, required; the Step structure is:
  - direction: the direction of edges (OUT, IN, BOTH), defaults to BOTH
  - labels: a list of edge labels
  - properties: filter edges by property values
  - max_degree: the maximum number of adjacent edges a single vertex may traverse during the query, defaults to 10000 (note: before version 0.12 only degree was supported as the parameter name inside step; since 0.12 the unified name is max_degree, and degree is still accepted for backward compatibility)
  - skip_degree: sets the minimum edge count at which super vertices are discarded during the query, i.e. a vertex is dropped entirely once its number of adjacent edges exceeds skip_degree. Optional; when enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal attempts to visit up to skip_degree edges of a vertex rather than just max_degree edges, adding traversal overhead that may noticeably affect query performance; make sure you understand it before enabling)
- max_depth: number of steps, required
- capacity: the maximum number of vertices visited during the traversal; optional, defaults to 10000000
- with_vertex: true means results include full vertex information (all vertices on the paths), false means only vertex ids are returned; optional, defaults to false
3.2.12.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/multinodeshortestpath
Request Body
{
"vertices": {
"ids": ["382:marko", "382:josh", "382:vadas", "382:peter", "383:lop", "383:ripple"]
},
@@ -761,8 +761,8 @@
}
]
}
3.2.12.3 Applicable Scenarios
Find shortest paths among multiple vertices, e.g.:
- Find the shortest paths among multiple companies and their legal representatives
3.2.13 Paths (GET, basic)
3.2.13.1 Function Introduction
Find all paths matching conditions such as starting vertex, target vertex, direction, optional edge label and maximum depth
Params
- source: id of the starting vertex, required
- target: id of the target vertex, required
- direction: the direction in which edges radiate from the starting vertex (OUT, IN, BOTH); optional, defaults to BOTH
- label: edge label; optional, defaults to all edge labels
- max_depth: number of steps, required
- max_degree: the maximum number of adjacent edges a single vertex may traverse during the query; optional, defaults to 10000
- capacity: the maximum number of vertices visited during the traversal; optional, defaults to 10000000
- limit: the maximum number of paths returned; optional, defaults to 10
3.2.13.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/paths?source="1:marko"&target="1:josh"&max_depth=5
Response Status
200
Response Body
{
"paths":[
{
@@ -780,8 +780,8 @@
}
]
}
3.2.13.3 Applicable Scenarios
Find all paths between two vertices, for example:
- In a social network, find all possible relationship paths between two users
- In a device-association network, find all association paths between two devices
3.2.14 Paths (POST, advanced)
3.2.14.1 Function Introduction
Find all paths matching conditions such as starting vertices, target vertices, step and maximum depth
Params
- sources: defines the starting vertices, required; specified by:
  - ids: a list of vertex ids
  - label and properties: if ids is not specified, the starting vertices are queried by the combined condition of label and properties
    - label: vertex type
    - properties: query starting vertices by property values
  Note: property values in properties may be lists, in which case any value in the list matches the key
- targets: defines the terminal vertices, required; specified by:
  - ids: a list of vertex ids
  - label and properties: if ids is not specified, the terminal vertices are queried by the combined condition of label and properties
    - label: vertex type
    - properties: query terminal vertices by property values
  Note: property values in properties may be lists, in which case any value in the list matches the key
- step: the path taken from a starting vertex to a terminal vertex, required; the Step structure is:
  - direction: the direction of edges (OUT, IN, BOTH), defaults to BOTH
  - labels: a list of edge labels
  - properties: filter edges by property values
  - max_degree: the maximum number of adjacent edges a single vertex may traverse during the query, defaults to 10000 (note: before version 0.12 only degree was supported as the parameter name inside step; since 0.12 the unified name is max_degree, and degree is still accepted for backward compatibility)
  - skip_degree: sets the minimum edge count at which super vertices are discarded during the query, i.e. a vertex is dropped entirely once its number of adjacent edges exceeds skip_degree. Optional; when enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal attempts to visit up to skip_degree edges of a vertex rather than just max_degree edges, adding traversal overhead that may noticeably affect query performance; make sure you understand it before enabling)
- max_depth: number of steps, required
- nearest: when true, the shortest path from the starting vertex to each result vertex has length exactly depth, i.e. no shorter path exists; when false, there is some path of length depth from the starting vertex to the result vertex (not necessarily the shortest, and cycles are allowed); optional, defaults to true
- capacity: the maximum number of vertices visited during the traversal; optional, defaults to 10000000
- limit: the maximum number of paths returned; optional, defaults to 10
- with_vertex: true means results include full vertex information (all vertices on the paths), false means only vertex ids are returned; optional, defaults to false
3.2.14.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/paths
Request Body
{
"sources": {
"ids": ["1:marko"]
},
@@ -819,8 +819,8 @@
}
]
}
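As with the other POST traversers, sources and targets may be selected by label and properties instead of ids; a hypothetical payload of that shape (values are illustrative only):

body = {
    "sources": {"ids": [], "label": "person", "properties": {"name": "marko"}},
    "targets": {"ids": [], "label": "person", "properties": {"name": "josh"}},
    "step": {"direction": "BOTH"},
    "max_depth": 10,
    "capacity": 10000000,
    "limit": 10,
    "with_vertex": False,
}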
3.2.14.3 Applicable Scenarios
Find all paths between two vertices, for example:
- In a social network, find all possible relationship paths between two users
- In a device-association network, find all association paths between two devices
3.2.15 Customized Paths
3.2.15.1 Function Introduction
Find all matching paths given a batch of starting vertices, edge rules (including direction, edge labels and property filters) and a maximum depth
Params
- sources: defines the starting vertices, required; specified by:
  - ids: a list of vertex ids
  - label and properties: if ids is not specified, the starting vertices are queried by the combined condition of label and properties
    - label: vertex type
    - properties: query starting vertices by property values
  Note: property values in properties may be lists, in which case any value in the list matches the key
- steps: the path rules followed from the starting vertices, a list of Steps, required. Each Step is structured as:
  - direction: the direction of edges (OUT, IN, BOTH), defaults to BOTH
  - labels: a list of edge labels
  - properties: filter edges by property values
  - weight_by: compute edge weights from the given property; effective when sort_by is not NONE; mutually exclusive with default_weight
  - default_weight: the default weight used when an edge lacks a property to compute the weight from; effective when sort_by is not NONE; mutually exclusive with weight_by
  - max_degree: the maximum number of adjacent edges a single vertex may traverse during the query, defaults to 10000 (note: before version 0.12 only degree was supported as the parameter name inside step; since 0.12 the unified name is max_degree, and degree is still accepted for backward compatibility)
  - sample: set this to sample the qualifying edges of a step; -1 means no sampling; defaults to sampling 100
- sort_by: sort by path weight; optional, defaults to NONE:
  - NONE: no sorting (default)
  - INCR: sort by path weight in ascending order
  - DECR: sort by path weight in descending order
- capacity: the maximum number of vertices visited during the traversal; optional, defaults to 10000000
- limit: the maximum number of paths returned; optional, defaults to 10
- with_vertex: true means results include full vertex information (all vertices on the paths), false means only vertex ids are returned; optional, defaults to false
3.2.15.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/customizedpaths
Request Body
{
"sources":{
"ids":[
@@ -947,8 +947,8 @@
}
]
}
3.2.15.3 Applicable Scenarios
Suitable for finding all kinds of complex path sets, for example:
- In a social network, find the paths to the influencers followed by users who watched films directed by Zhang Yimou (Zhang Yimou -> film -> user -> influencer)
- In a risk-control network, find the paths to friends of the immediate family of multiple high-risk users (high-risk user -> immediate family -> friend)
3.2.16 Template Paths
3.2.16.1 Function Introduction
Find all matching paths given a batch of starting vertices, edge rules (including direction, edge labels and property filters) and a maximum depth
Params
- sources: defines the starting vertices, required; specified by:
  - ids: a list of vertex ids
  - label and properties: if ids is not specified, the starting vertices are queried by the combined condition of label and properties
    - label: vertex type
    - properties: query starting vertices by property values
  Note: property values in properties may be lists, in which case any value in the list matches the key
- targets: defines the terminal vertices, required; specified by:
  - ids: a list of vertex ids
  - label and properties: if ids is not specified, the terminal vertices are queried by the combined condition of label and properties
    - label: vertex type
    - properties: query terminal vertices by property values
  Note: property values in properties may be lists, in which case any value in the list matches the key
- steps: the path rules followed from the starting vertices, a list of Steps, required. Each Step is structured as:
  - direction: the direction of edges (OUT, IN, BOTH), defaults to BOTH
  - labels: a list of edge labels
  - properties: filter edges by property values
  - max_times: the number of times the current step may repeat; when set to N, the path may pass through the current step 1 to N times
  - max_degree: the maximum number of adjacent edges a single vertex may traverse during the query, defaults to 10000 (note: before version 0.12 only degree was supported as the parameter name inside step; since 0.12 the unified name is max_degree, and degree is still accepted for backward compatibility)
  - skip_degree: sets the minimum edge count at which super vertices are discarded during the query, i.e. a vertex is dropped entirely once its number of adjacent edges exceeds skip_degree. Optional; when enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal attempts to visit up to skip_degree edges of a vertex rather than just max_degree edges, adding traversal overhead that may noticeably affect query performance; make sure you understand it before enabling)
- with_ring: Boolean; true includes cycles, false excludes cycles; defaults to false
- capacity: the maximum number of vertices visited during the traversal; optional, defaults to 10000000
- limit: the maximum number of paths returned; optional, defaults to 10
- with_vertex: true means results include full vertex information (all vertices on the paths), false means only vertex ids are returned; optional, defaults to false
3.2.16.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/templatepaths
Request Body
{
"sources": {
"ids": [],
"label": "person",
@@ -1067,8 +1067,8 @@
}
]
}
3.2.16.3 Applicable Scenarios
Suitable for finding all kinds of complex template paths, e.g. personA -(friend)-> personB -(classmate)-> personC, where the "friend" and "classmate" edges may repeat up to 3 and 4 levels respectively
3.2.17 Crosspoints
3.2.17.1 Function Introduction
Find intersection points given conditions such as starting vertex, target vertex, direction, optional edge label and maximum depth
Params
- source: id of the starting vertex, required
- target: id of the target vertex, required
- direction: the direction from the starting vertex towards the target vertex; the target-to-source direction is the reverse; BOTH ignores direction (OUT, IN, BOTH); optional, defaults to BOTH
- label: edge label; optional, defaults to all edge labels
- max_depth: number of steps, required
- max_degree: the maximum number of adjacent edges a single vertex may traverse during the query; optional, defaults to 10000
- capacity: the maximum number of vertices visited during the traversal; optional, defaults to 10000000
- limit: the maximum number of intersection points returned; optional, defaults to 10
3.2.17.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/crosspoints?source="2:lop"&target="2:ripple"&max_depth=5&direction=IN
Response Status
200
Response Body
{
"crosspoints":[
{
@@ -1081,8 +1081,8 @@
}
]
}
3.2.17.3 Applicable Scenarios
Find the intersection points of two vertices and the paths to them, for example:
- In a social network, find the topics or influencers both users follow
- In family relationships, find common ancestors
3.2.18 Customized Crosspoints
3.2.18.1 Function Introduction
Given a batch of starting vertices, multiple edge rules (including direction, edge labels and property filters) and a maximum depth, find the intersection of the endpoints of all matching paths
Params
- sources: defines the starting vertices, required; specified by:
  - ids: a list of vertex ids
  - label and properties: if ids is not specified, the starting vertices are queried by the combined condition of label and properties
    - label: vertex type
    - properties: query starting vertices by property values
  Note: property values in properties may be lists, in which case any value in the list matches the key
- path_patterns: the path rules followed from the starting vertices, a list of rules, required. Each rule is a PathPattern
  - each PathPattern is a list of Steps, where each Step is structured as:
    - direction: the direction of edges (OUT, IN, BOTH), defaults to BOTH
    - labels: a list of edge labels
    - properties: filter edges by property values
    - max_degree: the maximum number of adjacent edges a single vertex may traverse during the query, defaults to 10000 (note: before version 0.12 only degree was supported as the parameter name inside step; since 0.12 the unified name is max_degree, and degree is still accepted for backward compatibility)
    - skip_degree: sets the minimum edge count at which super vertices are discarded during the query, i.e. a vertex is dropped entirely once its number of adjacent edges exceeds skip_degree. Optional; when enabled, it must satisfy the constraint skip_degree >= max_degree. Defaults to 0 (disabled), meaning no vertex is skipped. (Note: when enabled, the traversal attempts to visit up to skip_degree edges of a vertex rather than just max_degree edges, adding traversal overhead that may noticeably affect query performance; make sure you understand it before enabling)
- capacity: the maximum number of vertices visited during the traversal; optional, defaults to 10000000
- limit: the maximum number of paths returned; optional, defaults to 10
- with_path: true returns the paths through the intersection points, false does not; optional, defaults to false
- with_vertex: optional, defaults to false:
  - true: results include full vertex information (all vertices on the paths)
    - when with_path is true, full information of all vertices on the paths is returned
    - when with_path is false, full information of all intersection points is returned
  - false: only vertex ids are returned
3.2.18.2 Usage
Method & Url
POST http://localhost:8080/graphs/{graph}/traversers/customizedcrosspoints
Request Body
{
"sources":{
"ids":[
"2:lop",
@@ -1204,8 +1204,8 @@
}
]
}
3.2.18.3 Applicable Scenarios
Query cases where a group of vertices reach a common endpoint through multiple kinds of paths. For example:
- In a product knowledge graph, several phones, learning machines and game consoles all belong to the top-level category of electronic devices via different lower-level category paths
3.2.19 Rings
3.2.19.1 Function Introduction
Find reachable cycles given conditions such as starting vertex, direction, optional edge label and maximum depth
For example: 1 -> 25 -> 775 -> 14690 -> 25, where the cycle is 25 -> 775 -> 14690 -> 25
Params
- source: id of the starting vertex, required
- direction: the direction of the edges leaving the starting vertex (OUT, IN, BOTH); optional, defaults to BOTH
- label: edge label; optional, defaults to all edge labels
- max_depth: number of steps, required
- source_in_ring: whether the cycle must include the starting vertex; optional, defaults to true
- max_degree: the maximum number of adjacent edges a single vertex may traverse during the query; optional, defaults to 10000
- capacity: the maximum number of vertices visited during the traversal; optional, defaults to 10000000
- limit: the maximum number of reachable cycles returned; optional, defaults to 10
3.2.19.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/rings?source="1:marko"&max_depth=2
Response Status
200
Response Body
{
"rings":[
{
@@ -1231,8 +1231,8 @@
}
]
}
3.2.19.3 Applicable Scenarios
Query the cycles reachable from a starting vertex, for example:
- In risk control, query the people or devices in circular guarantees reachable from a user
- In a device-association network, discover circularly referenced devices around a device
3.2.20 Rays
3.2.20.1 Function Introduction
Find paths that radiate from the starting vertex out to boundary vertices, given conditions such as direction, optional edge label and maximum depth
For example: 1 -> 25 -> 775 -> 14690 -> 2289 -> 18379, where 18379 is a boundary vertex, i.e. no edges leave 18379
Params
- source: id of the starting vertex, required
- direction: the direction of the edges leaving the starting vertex (OUT, IN, BOTH); optional, defaults to BOTH
- label: edge label; optional, defaults to all edge labels
- max_depth: number of steps, required
- max_degree: the maximum number of adjacent edges a single vertex may traverse during the query; optional, defaults to 10000
- capacity: the maximum number of vertices visited during the traversal; optional, defaults to 10000000
- limit: the maximum number of non-cyclic paths returned; optional, defaults to 10
3.2.20.2 Usage
Method & Url
GET http://localhost:8080/graphs/{graph}/traversers/rays?source="1:marko"&max_depth=2&direction=OUT
Response Status
200
Response Body
{
"rays":[
{
@@ -1263,8 +1263,8 @@
}
]
}
3.2.20.3 Applicable Scenarios
Find the paths from a starting vertex to the boundary vertices of some relationship, for example:
- In family relationships, find the paths from a person to all descendants who have no children yet
- In a device-association network, find the paths from a device to terminal devices
3.2.21 Fusiform Similarity
3.2.21.1 Function Introduction
Query the "fusiform similars" of a batch of vertices by condition. Two vertices are considered "fusiform similars" when they share some relationship with many common vertices. An example of "fusiform similars": if "reader A" has read 100 books, the readers who have read more than 80 of those 100 books can be defined as the "fusiform similars" of "reader A"; a sketch of this definition follows
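Illustratively (a client-side sketch of the definition only, not the server's implementation; helper names are hypothetical, and "neighbors" is assumed to return a reader's book set):

def fusiform_similars(reader, all_readers, neighbors, alpha=0.8, min_neighbors=100):
    books = set(neighbors(reader))
    if len(books) < min_neighbors:  # too few neighbors: no "fusiform similars"
        return []
    similars = []
    for other in all_readers:
        if other != reader:
            shared = len(books & set(neighbors(other)))
            if shared / len(books) >= alpha:  # e.g. read >= 80 of the 100 books
                similars.append(other)
    return similars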
Params
- sources: defines the starting vertices, required; specified by:
  - ids: a list of vertex ids
  - label and properties: if ids is not specified, the starting vertices are queried by the combined condition of label and properties
    - label: vertex type
    - properties: query starting vertices by property values
  Note: property values in properties may be lists, in which case any value in the list matches the key
- label: edge label; optional, defaults to all edge labels
- direction: the direction in which edges radiate from the starting vertex (OUT, IN, BOTH); optional, defaults to BOTH
- min_neighbors: the minimum number of neighbors; when a starting vertex has fewer neighbors than this threshold, it is considered to have no "fusiform similars". For example, when looking for the "fusiform similars" of the books "reader A" has read, min_neighbors set to 100 means "reader A" must have read at least 100 books to have any "fusiform similars"; required
- alpha: the similarity, i.e. the ratio of the number of neighbors shared between the starting vertex and a "fusiform similar" to the total number of the starting vertex's neighbors; required
- min_similars: the minimum number of "fusiform similars"; a starting vertex and its "fusiform similars" are returned only when it has at least this many; optional, defaults to 1
- top: return the top most similar "fusiform similars" of each starting vertex; required, 0 means all
- group_property: used together with min_groups; a starting vertex and its "fusiform similars" are returned only when the given property has at least min_groups distinct values among the starting vertex and all of its "fusiform similars". For example, to recommend "out-of-town" book friends for "reader A", set group_property to the reader's "city" property and min_groups to at least 2; optional, omit it to skip property-based filtering
- min_groups: used together with group_property; only meaningful when group_property is set
- max_degree: the maximum number of adjacent edges a single vertex may traverse during the query; optional, defaults to 10000
- capacity: the maximum number of vertices visited during the traversal; optional, defaults to 10000000
- limit: the maximum number of results returned (a starting vertex together with its "fusiform similars" counts as one result); optional, defaults to 10
- with_intermediary: whether to return the intermediate vertices commonly associated with the starting vertex and its "fusiform similars"; defaults to false
- with_vertex: optional, defaults to false:
  - true: results include full vertex information
  - false: only vertex ids are returned
3.2.21.2 Usage
Method & Url
POST http://localhost:8080/graphs/hugegraph/traversers/fusiformsimilarity
Request Body
{
"sources":{
"ids":[],
"label": "person",
@@ -1334,8 +1334,8 @@
}
]
}
3.2.21.3 Applicable Scenarios
Query vertices that are highly similar to a group of vertices. For example:
- readers with a book list similar to a given reader's
- players who play games similar to a given player's
3.2.22 Vertices
3.2.22.1 Batch-query vertices by a list of vertex ids
Params
- ids: the list of vertex ids to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/vertices?ids="1:marko"&ids="2:lop"
Response Status
200
Response Body
{
"vertices":[
{
@@ -1390,8 +1390,8 @@
}
]
}
3.2.22.2 Get vertex Shard information
Get vertex shard information by the given shard size split_size (can be used together with the Scan of 3.2.22.3 to retrieve vertices).
Params
- split_size: shard size, required
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/vertices/shards?split_size=67108864
Response Status
200
Response Body
{
"shards":[
{
@@ -1417,8 +1417,8 @@
......
]
}
3.2.22.3 Batch-get vertices by Shard information
Batch-query vertices by the given shard information (see 3.2.22.2 for how to get Shard information).
Params
- start: shard start position, required
- end: shard end position, required
- page: paging position; optional, defaults to null, meaning no paging; when page is "", it denotes the first page, starting from the position indicated by start
- page_limit: the maximum number of vertices per page when paging; optional, defaults to 100000
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/vertices/scan?start=0&end=4294967295
Response Status
200
Response Body
{
"vertices":[
{
@@ -1573,8 +1573,8 @@
}
]
}
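Together, the two endpoints above allow a full vertex scan; a minimal sketch (assuming the returned shard objects carry start/end fields matching the scan parameters, and that the scan response carries a page field like the paged queries documented later):

import requests

BASE = "http://localhost:8080/graphs/hugegraph/traversers/vertices"

# Fetch shards, then page through each shard's vertices via the scan API.
shards = requests.get(f"{BASE}/shards", params={"split_size": 67108864}).json()["shards"]
for shard in shards:
    page = ""  # "" requests the first page
    while page is not None:
        resp = requests.get(f"{BASE}/scan", params={
            "start": shard["start"], "end": shard["end"],
            "page": page, "page_limit": 100000,
        }).json()
        for vertex in resp["vertices"]:
            pass  # process each vertex here
        page = resp.get("page")  # JSON null / absent -> done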
3.2.22.4 Applicable Scenarios
- Querying vertices by an id list is useful for batch vertex lookup, e.g. after a path query returns multiple paths, you can then query all vertex properties of a particular path.
- Getting shards and querying vertices by shard can be used to traverse all vertices
3.2.23 Edges
3.2.23.1 Batch-query edges by a list of edge ids
Params
- ids: the list of edge ids to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/edges?ids="S1:josh>1>>S2:lop"&ids="S1:josh>1>>S2:ripple"
Response Status
200
Response Body
{
"edges": [
{
@@ -1605,8 +1605,8 @@
}
]
}
3.2.23.2 Get edge Shard information
Get edge shard information by the given shard size split_size (can be used together with the Scan of 3.2.23.3 to retrieve edges).
Params
- split_size: shard size, required
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/edges/shards?split_size=4294967295
Response Status
200
Response Body
{
"shards":[
{
@@ -1636,8 +1636,8 @@
}
]
}
3.2.23.3 Batch-get edges by Shard information
Batch-query edges by the given shard information (see 3.2.23.2 for how to get Shard information).
Params
- start: shard start position, required
- end: shard end position, required
- page: paging position; optional, defaults to null, meaning no paging; when page is "", it denotes the first page, starting from the position indicated by start
- page_limit: the maximum number of edges per page when paging; optional, defaults to 100000
Method & Url
GET http://localhost:8080/graphs/hugegraph/traversers/edges/scan?start=0&end=3221225469
Response Status
200
Response Body
{
"edges":[
{
diff --git a/cn/docs/clients/restful-api/variable/index.html b/cn/docs/clients/restful-api/variable/index.html
index 468626577..dcd596eef 100644
--- a/cn/docs/clients/restful-api/variable/index.html
+++ b/cn/docs/clients/restful-api/variable/index.html
@@ -11,26 +11,26 @@
Variable API
5.1 Variables
Variables can be used to store data about the whole graph; the data is accessed as key-value pairs
5.1.1 Create or update a key-value pair
Method & Url
PUT http://localhost:8080/graphs/hugegraph/variables/name
Request Body
{
"data": "tom"
}
Response Status
200
Response Body
{
"name": "tom"
}
5.1.2 List all key-value pairs
Method & Url
GET http://localhost:8080/graphs/hugegraph/variables
Response Status
200
Response Body
{
"name": "tom"
}
5.1.3 Get a key-value pair
Method & Url
GET http://localhost:8080/graphs/hugegraph/variables/name
Response Status
200
Response Body
{
"name": "tom"
}
5.1.4 Delete a key-value pair
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/variables/name
Response Status
204
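Taken together, a minimal round trip through this API might look like this sketch (using Python requests; the server location is assumed):

import requests

BASE = "http://localhost:8080/graphs/hugegraph/variables"

requests.put(f"{BASE}/name", json={"data": "tom"})  # create or update -> {"name": "tom"}
print(requests.get(BASE).json())                    # list all key-value pairs
print(requests.get(f"{BASE}/name").json())          # get one pair -> {"name": "tom"}
requests.delete(f"{BASE}/name")                     # delete -> 204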
diff --git a/cn/docs/clients/restful-api/vertex/index.html b/cn/docs/clients/restful-api/vertex/index.html
index d1c899a76..2f0f7fa41 100644
--- a/cn/docs/clients/restful-api/vertex/index.html
+++ b/cn/docs/clients/restful-api/vertex/index.html
@@ -33,8 +33,8 @@
Vertex API
2.1 Vertex
The Id strategy of a vertex label determines the id type of its vertices; the correspondence is:
Id_Strategy         id type
AUTOMATIC           number
PRIMARY_KEY         string
CUSTOMIZE_STRING    string
CUSTOMIZE_NUMBER    number
CUSTOMIZE_UUID      uuid
In the GET/PUT/DELETE APIs for vertices, the id part of the url must carry type information, expressed by whether the id is quoted as a JSON string, that is:
- when the id type is number, the id in the url is unquoted, e.g. xxx/vertices/123456
- when the id type is string, the id in the url is quoted, e.g. xxx/vertices/"123456"
The following examples all assume that the schema described earlier has already been created
2.1.1 Create a vertex
Method & Url
POST http://localhost:8080/graphs/hugegraph/graph/vertices
Request Body
{
"label": "person",
"properties": {
"name": "marko",
@@ -61,8 +61,8 @@
]
}
}
2.1.2 Create multiple vertices
Method & Url
POST http://localhost:8080/graphs/hugegraph/graph/vertices/batch
Request Body
[
{
"label": "person",
"properties": {
@@ -84,8 +84,8 @@
"1:marko",
"2:ripple"
]
2.1.3 Update vertex properties
Method & Url
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/"1:marko"?action=append
Request Body
{
"label": "person",
"properties": {
"age": 30,
@@ -147,8 +147,8 @@
}
]
}
Method & Url
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/batch
Request Body
{
"vertices":[
{
"label":"software",
@@ -212,8 +212,8 @@
}
]
}
Result analysis:
- The lang property specified no update strategy, so the new value simply overwrites the old one, whether or not the new value is null;
- The price property specified the BIGGER strategy; the old value was 328 and the new value 299, so the old value 328 is kept;
- The age property specified the OVERRIDE strategy, but the new values did not include age, which is equivalent to age being null, so the original value 32 is kept;
- The city property also specified the OVERRIDE strategy, and its new value is not null, so the old value is overwritten;
- The weight property specified the SUM strategy; the old value was 0.1 and the new value 0.2, so the final value is 0.3;
- The hobby property (with Set cardinality) specified the UNION strategy, so the new values are unioned with the old ones;
The other update strategies work analogously and are not repeated here; a sketch of these merge semantics follows.
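Illustratively, the rules described above behave like this sketch (a client-side illustration of the merge semantics, not the server implementation):

def merge_property(old, new, strategy=None):
    if strategy is None:
        return new  # no strategy: the new value always overwrites, even if null
    if strategy == "OVERRIDE":
        return old if new is None else new  # a null new value keeps the old one
    if strategy == "BIGGER":
        return max(old, new)  # e.g. max(328, 299) -> 328
    if strategy == "SUM":
        return old + new  # e.g. 0.1 + 0.2 -> 0.3
    if strategy == "UNION":
        return set(old) | set(new)  # Set-cardinality properties
    raise ValueError(f"strategy not covered by this sketch: {strategy}")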
2.1.5 Delete vertex properties
Method & Url
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/"1:marko"?action=eliminate
Request Body
{
"label": "person",
"properties": {
"city": "Beijing"
@@ -239,8 +239,8 @@
]
}
}
2.1.6 Get vertices matching conditions
Params
- label: vertex label
- properties: property key-value pairs (querying by property requires that an index has been created in advance)
- limit: maximum number of results
- page: page number
All of the above parameters are optional. If the page parameter is provided, the limit parameter must also be provided and no other parameters are allowed. label, properties and limit can be combined arbitrarily.
Property key-value pairs consist of the property name and value in JSON format. Multiple key-value pairs may be used as query conditions. Property values support exact matching and range matching: exact matching has the form properties={"age":29}, range matching the form properties={"age":"P.gt(29)"}. The expressions supported by range matching are:
P.eq(number)                        vertices whose property value equals number
P.neq(number)                       vertices whose property value is not equal to number
P.lt(number)                        vertices whose property value is less than number
P.lte(number)                       vertices whose property value is less than or equal to number
P.gt(number)                        vertices whose property value is greater than number
P.gte(number)                       vertices whose property value is greater than or equal to number
P.between(number1,number2)          vertices whose property value is >= number1 and < number2
P.inside(number1,number2)           vertices whose property value is > number1 and < number2
P.outside(number1,number2)          vertices whose property value is < number1 or > number2
P.within(value1,value2,value3,...)  vertices whose property value equals any one of the given values
Query all vertices with age 29 and label person
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/vertices?label=person&properties={"age":29}&limit=1
Response Status
200
Response Body
{
"vertices": [
{
@@ -270,8 +270,8 @@
}
]
}
Query all vertices with paging, get the first page (page carries no value), limited to 3
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/vertices?page&limit=3
Response Status
200
Response Body
{
"vertices": [{
"id": "2:ripple",
@@ -334,8 +334,8 @@
"page": "001000100853313a706574657200f07ffffffc00e797c6349be736fffc8699e8a502efe10004"
}
The returned body carries the page token for the next page, "page": "001000100853313a706574657200f07ffffffc00e797c6349be736fffc8699e8a502efe10004". Assign this value to the page parameter when querying the next page.
Query all vertices with paging, get the next page (page set to the page value returned by the previous request), limited to 3
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/vertices?page=001000100853313a706574657200f07ffffffc00e797c6349be736fffc8699e8a502efe10004&limit=3
Response Status
200
Response Body
{
"vertices": [{
"id": "1:josh",
@@ -397,8 +397,8 @@
],
"page": null
}
Here "page": null indicates there are no more pages. (Note: when the backend is Cassandra, for performance reasons the returned page value may be non-empty even when the returned page happens to be the last one; requesting the next page with that page value then returns empty data and page = null. Other cases behave similarly.) A sketch of a full pagination loop follows.
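A minimal pagination loop over all vertices (a sketch; it assumes an empty page value is accepted for the first page, like the "" convention of the scan APIs above):

import requests

url = "http://localhost:8080/graphs/hugegraph/graph/vertices"
page = ""  # assumption: an empty page value selects the first page
while page is not None:
    resp = requests.get(url, params={"page": page, "limit": 3}).json()
    for vertex in resp["vertices"]:
        pass  # process each vertex here
    page = resp["page"]  # JSON null -> None, i.e. no more pages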
2.1.7 Get a vertex by Id
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"
Response Status
200
Response Body
{
"id": "1:marko",
"label": "person",
@@ -418,10 +418,10 @@
]
}
}
2.1.8 Delete a vertex by Id
Params
- label: vertex label, optional
Delete a vertex by Id only
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"
Response Status
204
Delete a vertex by Label + Id
Deleting a vertex by specifying both the label parameter and the Id generally performs better than deleting by Id alone.
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"?label=person
Response Status
204
diff --git a/cn/docs/clients/restful-api/vertexlabel/index.html b/cn/docs/clients/restful-api/vertexlabel/index.html
index 30cf5434c..57c9981d1 100644
--- a/cn/docs/clients/restful-api/vertexlabel/index.html
+++ b/cn/docs/clients/restful-api/vertexlabel/index.html
@@ -16,8 +16,8 @@
VertexLabel API
1.3 VertexLabel
Assume the PropertyKeys listed in 1.1.3 have already been created
Params
- id: id value of the vertex label
- name: name of the vertex label, required
- id_strategy: the id strategy of the vertex label: primary-key id, automatic generation, customized string, customized number or customized UUID; defaults to primary-key id
- properties: the property types associated with the vertex label
- primary_keys: the primary-key properties; must be non-empty when the id strategy is PRIMARY_KEY, and must be empty for all other id strategies;
- enable_label_index: whether to enable the label index; disabled by default
- index_names: the indexes created for the vertex label, see 3.4 for details
- nullable_keys: properties that may be null
- user_data: general-purpose metadata for the vertex label, with the same role as for property keys
1.3.1 Create a VertexLabel
Method & Url
POST http://localhost:8080/graphs/hugegraph/schema/vertexlabels
Request Body
{
"name": "person",
"id_strategy": "DEFAULT",
"properties": [
@@ -79,8 +79,8 @@
"ttl_start_time": "createdTime",
"enable_label_index": true
}
1.3.2 Add properties or userdata to an existing VertexLabel, or remove userdata (removing properties is not currently supported)
Params
- action: indicates whether the current operation is an addition or a removal; its value is append (add) or eliminate (remove)
Method & Url
PUT http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person?action=append
Request Body
{
"name": "person",
"properties": [
"city"
@@ -113,8 +113,8 @@
"super": "animal"
}
}
1.3.3 Get all VertexLabels
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/vertexlabels
Response Status
200
Response Body
{
"vertexlabels": [
{
@@ -161,8 +161,8 @@
}
]
}
1.3.4 Get a VertexLabel by name
Method & Url
GET http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person
Response Status
200
Response Body
{
"id": 1,
"primary_keys": [
@@ -185,8 +185,8 @@
"super": "animal"
}
}
1.3.5 Delete a VertexLabel by name
Deleting a VertexLabel also deletes the corresponding vertices and the related index data, and produces an asynchronous task
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person
Response Status
202
Response Body
{
"task_id": 1
}
diff --git a/cn/docs/config/_print/index.html b/cn/docs/config/_print/index.html
index a2d247ee7..099180529 100644
--- a/cn/docs/config/_print/index.html
+++ b/cn/docs/config/_print/index.html
@@ -89,117 +89,117 @@
}
There are many configuration items above, but for now only two of them need attention: channelizer and graphs.
- graphs: the graphs GremlinServer opens at startup; this item is a map whose keys are graph names and whose values are the graphs' configuration file paths;
- channelizer: GremlinServer communicates with clients in one of two ways, WebSocket or HTTP (the default). With WebSocket,
users can quickly try out HugeGraph features via the Gremlin-Console, but large-scale data import is not supported;
+Using HTTP is recommended, since HugeGraph's peripheral components are all built on HTTP; by default GremlinServer serves at localhost:8182, and if this needs to change, configure host and port
- host: the hostname or IP of the machine GremlinServer is deployed on; HugeGraphServer does not currently support distributed deployment, and GremlinServer is not exposed directly to users;
- port: the port of the machine GremlinServer is deployed on;
The matching option gremlinserver.url=http://host:port must also be added to rest-server.properties
3 rest-server.properties
The default content of the rest-server.properties file is as follows:
# bind url
+restserver.url=http://127.0.0.1:8080
+# gremlin server url, need to be consistent with host and port in gremlin-server.yaml
+#gremlinserver.url=http://127.0.0.1:8182
+
+# graphs list with pair NAME:CONF_PATH
+graphs=[hugegraph:conf/hugegraph.properties]
+
+# authentication
+#auth.authenticator=
+#auth.admin_token=
+#auth.user_tokens=[]
+
+server.id=server-1
+server.role=master
+
- restserver.url: the url at which RestServer provides its service; modify it according to the actual environment;
- graphs: the graphs RestServer needs to open at startup; this item is a map whose keys are graph names and whose values are the paths of the corresponding config files;
Note: both gremlin-server.yaml and rest-server.properties contain the graphs option, and the init-store
command initializes the graphs listed under graphs in gremlin-server.yaml.
The option gremlinserver.url is the url at which GremlinServer serves RestServer; it defaults to http://localhost:8182. If modified, it must match the host and port in gremlin-server.yaml;
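For example, a matching pair of settings (using the default values shown above) looks like this:
# gremlin-server.yaml
host: 127.0.0.1
port: 8182
# rest-server.properties
gremlinserver.url=http://127.0.0.1:8182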
4 hugegraph.properties
hugegraph.properties is a family of files: if the system hosts multiple graphs, there will be several similar files, one per graph. The file configures parameters related to graph storage and queries; its default content is as follows:
+# gremlin entrance to create graph
+gremlin.graph=com.baidu.hugegraph.HugeFactory
+
+# cache config
+#schema.cache_capacity=100000
+# vertex-cache default is 1000w, 10min expired
+#vertex.cache_capacity=10000000
+#vertex.cache_expire=600
+# edge-cache default is 100w, 10min expired
+#edge.cache_capacity=1000000
+#edge.cache_expire=600
+
+# schema illegal name template
+#schema.illegal_name_regex=\s+|~.*
+
+#vertex.default_label=vertex
+
+backend=rocksdb
+serializer=binary
+
+store=hugegraph
+
+raft.mode=false
+raft.safe_read=false
+raft.use_snapshot=false
+raft.endpoint=127.0.0.1:8281
+raft.group_peers=127.0.0.1:8281,127.0.0.1:8282,127.0.0.1:8283
+raft.path=./raft-log
+raft.use_replicator_pipeline=true
+raft.election_timeout=10000
+raft.snapshot_interval=3600
+raft.backend_threads=48
+raft.read_index_threads=8
+raft.queue_size=16384
+raft.queue_publish_timeout=60
+raft.apply_batch=1
+raft.rpc_threads=80
+raft.rpc_connect_timeout=5000
+raft.rpc_timeout=60000
+
+# if use 'ikanalyzer', need download jar from 'https://github.com/apache/hugegraph-doc/raw/ik_binary/dist/server/ikanalyzer-2012_u6.jar' to lib directory
+search.text_analyzer=jieba
+search.text_analyzer_mode=INDEX
+
+# rocksdb backend config
+#rocksdb.data_path=/path/to/disk
+#rocksdb.wal_path=/path/to/disk
+
+# cassandra backend config
+cassandra.host=localhost
+cassandra.port=9042
+cassandra.username=
+cassandra.password=
+#cassandra.connect_timeout=5
+#cassandra.read_timeout=20
+#cassandra.keyspace.strategy=SimpleStrategy
+#cassandra.keyspace.replication=3
+
+# hbase backend config
+#hbase.hosts=localhost
+#hbase.port=2181
+#hbase.znode_parent=/hbase
+#hbase.threads_max=64
+
+# mysql backend config
+#jdbc.driver=com.mysql.jdbc.Driver
+#jdbc.url=jdbc:mysql://127.0.0.1:3306
+#jdbc.username=root
+#jdbc.password=
+#jdbc.reconnect_max_times=3
+#jdbc.reconnect_interval=3
+#jdbc.sslmode=false
+
+# postgresql & cockroachdb backend config
+#jdbc.driver=org.postgresql.Driver
+#jdbc.url=jdbc:postgresql://localhost:5432/
+#jdbc.username=postgres
+#jdbc.password=
+
+# palo backend config
+#palo.host=127.0.0.1
+#palo.poll_interval=10
+#palo.temp_dir=./palo-data
+#palo.file_limit_size=32
+
Pay special attention to the uncommented items:
- gremlin.graph: the startup entry point of GremlinServer; users should not modify this item;
- backend: the backend store to use; available values are memory, cassandra, scylladb, mysql, hbase, postgresql and rocksdb;
- serializer: mainly for internal use, serializing schema, vertices and edges to the backend; available values are text, cassandra, scylladb and binary (note: for the rocksdb backend the value must be binary; for other backends serializer must match backend, e.g. hbase for the hbase backend);
- store: the database name the graph is stored under in the backend; for cassandra and scylladb it is the keyspace name. This value is unrelated to the graph names in GremlinServer and RestServer, but for clarity it is recommended to use the same name;
- cassandra.host: only meaningful when backend is cassandra or scylladb; the seeds of the cassandra/scylladb cluster;
- cassandra.port: only meaningful when backend is cassandra or scylladb; the native port of the cassandra/scylladb cluster;
- rocksdb.data_path: only meaningful when backend is rocksdb; the data directory of rocksdb;
- rocksdb.wal_path: only meaningful when backend is rocksdb; the log (WAL) directory of rocksdb;
- admin.token: a token used to retrieve the server's configuration, e.g. http://localhost:8080/graphs/hugegraph/conf?token=162f7848-0b6d-4faf-b557-3a0797869c55
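For instance, with the default token shown above, the running server's configuration can be fetched from the command line:
$ curl "http://localhost:8080/graphs/hugegraph/conf?token=162f7848-0b6d-4faf-b557-3a0797869c55"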
5 Multi-graph configuration
The system can host multiple graphs, and each graph can use a different backend; for example, graph hugegraph can use cassandra as its backend while hugegraph1 uses rocksdb.
Configuration is straightforward:
Modify gremlin-server.yaml
Add a key-value pair to the graphs field of gremlin-server.yaml, with the graph name as the key and the path of the graph's config file as the value, for example:
graphs: {
hugegraph: conf/hugegraph.properties,
hugegraph1: conf/hugegraph1.properties
}
-
Modify rest-server.properties
Add a key-value pair to the graphs field of rest-server.properties, with the graph name as the key and the path of the graph's config file as the value, for example:
graphs=[hugegraph:conf/hugegraph.properties, hugegraph1:conf/hugegraph1.properties]
-
Add hugegraph1.properties
Copy hugegraph.properties to hugegraph1.properties and modify the graph's database name as well as the backend-related parameters, for example:
store=hugegraph1
-
-...
-
-backend=rocksdb
-serializer=binary
-
Stop the Server, run init-store.sh to initialize (creating the database for the new graph), then restart the Server
$ bin/stop-hugegraph.sh
+
Modify rest-server.properties
Add a key-value pair to the graphs field of rest-server.properties, with the graph name as the key and the path of the graph's config file as the value, for example:
graphs=[hugegraph:conf/hugegraph.properties, hugegraph1:conf/hugegraph1.properties]
+
Add hugegraph1.properties
Copy hugegraph.properties to hugegraph1.properties and modify the graph's database name as well as the backend-related parameters, for example:
store=hugegraph1
+
+...
+
+backend=rocksdb
+serializer=binary
+
Stop the Server, run init-store.sh to initialize (creating the database for the new graph), then restart the Server
$ bin/stop-hugegraph.sh
$ bin/init-store.sh
$ bin/start-hugegraph.sh
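After the restart both graphs should be served; a quick sanity check (assuming the default restserver.url) is to list the graphs over the REST API:
$ curl http://localhost:8080/graphs
# the response should contain both hugegraph and hugegraph1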
2 - HugeGraph config options
Gremlin Server config options
Corresponding config file: gremlin-server.yaml
| config option | default value | description |
|---|---|---|
| host | 127.0.0.1 | The host or ip of Gremlin Server. |
| port | 8182 | The listening port of Gremlin Server. |
| graphs | hugegraph: conf/hugegraph.properties | The map of graphs with name and config file path. |
| scriptEvaluationTimeout | 30000 | The timeout for gremlin script execution(millisecond). |
| channelizer | org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer | Indicates the protocol which the Gremlin Server provides service. |
| authentication | authenticator: com.baidu.hugegraph.auth.StandardAuthenticator, config: {tokens: conf/rest-server.properties} | The authenticator and config(contains tokens path) of authentication mechanism. |
Rest Server & API config options
Corresponding config file: rest-server.properties
| config option | default value | description |
|---|---|---|
| graphs | [hugegraph:conf/hugegraph.properties] | The map of graphs' name and config file. |
| server.id | server-1 | The id of rest server, used for license verification. |
| server.role | master | The role of nodes in the cluster, available types are [master, worker, computer] |
| restserver.url | http://127.0.0.1:8080 | The url for listening of rest server. |
| ssl.keystore_file | server.keystore | The path of server keystore file used when https protocol is enabled. |
| ssl.keystore_password | | The password of the path of the server keystore file used when the https protocol is enabled. |
| restserver.max_worker_threads | 2 * CPUs | The maximum worker threads of rest server. |
| restserver.min_free_memory | 64 | The minimum free memory(MB) of rest server, requests will be rejected when the available memory of system is lower than this value. |
| restserver.request_timeout | 30 | The time in seconds within which a request must complete, -1 means no timeout. |
| restserver.connection_idle_timeout | 30 | The time in seconds to keep an inactive connection alive, -1 means no timeout. |
| restserver.connection_max_requests | 256 | The max number of HTTP requests allowed to be processed on one keep-alive connection, -1 means unlimited. |
| gremlinserver.url | http://127.0.0.1:8182 | The url of gremlin server. |
| gremlinserver.max_route | 8 | The max route number for gremlin server. |
| gremlinserver.timeout | 30 | The timeout in seconds of waiting for gremlin server. |
| batch.max_edges_per_batch | 500 | The maximum number of edges submitted per batch. |
| batch.max_vertices_per_batch | 500 | The maximum number of vertices submitted per batch. |
| batch.max_write_ratio | 50 | The maximum thread ratio for batch writing, only take effect if the batch.max_write_threads is 0. |
| batch.max_write_threads | 0 | The maximum threads for batch writing, if the value is 0, the actual value will be set to batch.max_write_ratio * restserver.max_worker_threads. |
| auth.authenticator | | The class path of authenticator implementation. e.g., com.baidu.hugegraph.auth.StandardAuthenticator, or com.baidu.hugegraph.auth.ConfigAuthenticator. |
| auth.admin_token | 162f7848-0b6d-4faf-b557-3a0797869c55 | Token for administrator operations, only for com.baidu.hugegraph.auth.ConfigAuthenticator. |
| auth.graph_store | hugegraph | The name of graph used to store authentication information, like users, only for com.baidu.hugegraph.auth.StandardAuthenticator. |
| auth.user_tokens | [hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31] | The map of user tokens with name and password, only for com.baidu.hugegraph.auth.ConfigAuthenticator. |
| auth.audit_log_rate | 1000.0 | The max rate of audit log output per user, default value is 1000 records per second. |
| auth.cache_capacity | 10240 | The max cache capacity of each auth cache item. |
| auth.cache_expire | 600 | The expiration time in seconds of vertex cache. |
| auth.remote_url | | If the address is empty, it provide auth service, otherwise it is auth client and also provide auth service through rpc forwarding. The remote url can be set to multiple addresses, which are concat by ','. |
| auth.token_expire | 86400 | The expiration time in seconds after token created |
| auth.token_secret | FXQXbJtbCLxODc6tGci732pkH1cyf8Qg | Secret key of HS256 algorithm. |
| exception.allow_trace | false | Whether to allow exception trace stack. |
Basic config options
Basic config options and backend config options correspond to the config file {graph-name}.properties, e.g. hugegraph.properties
| config option | default value | description |
|---|---|---|
| gremlin.graph | com.baidu.hugegraph.HugeFactory | Gremlin entrance to create graph. |
| backend | rocksdb | The data store type, available values are [memory, rocksdb, cassandra, scylladb, hbase, mysql]. |
| serializer | binary | The serializer for backend store, available values are [text, binary, cassandra, hbase, mysql]. |
| store | hugegraph | The database name like Cassandra Keyspace. |
| store.connection_detect_interval | 600 | The interval in seconds for detecting connections, if the idle time of a connection exceeds this value, detect it and reconnect if needed before using, value 0 means detecting every time. |
| store.graph | g | The graph table name, which store vertex, edge and property. |
| store.schema | m | The schema table name, which store meta data. |
| store.system | s | The system table name, which store system data. |
| schema.illegal_name_regex | .*\s+$\|~.* | The regex specified the illegal format for schema name. |
| schema.cache_capacity | 10000 | The max cache size(items) of schema cache. |
| vertex.cache_type | l2 | The type of vertex cache, allowed values are [l1, l2]. |
| vertex.cache_capacity | 10000000 | The max cache size(items) of vertex cache. |
| vertex.cache_expire | 600 | The expire time in seconds of vertex cache. |
| vertex.check_customized_id_exist | false | Whether to check the vertices exist for those using customized id strategy. |
| vertex.default_label | vertex | The default vertex label. |
| vertex.tx_capacity | 10000 | The max size(items) of vertices(uncommitted) in transaction. |
| vertex.check_adjacent_vertex_exist | false | Whether to check the adjacent vertices of edges exist. |
| vertex.lazy_load_adjacent_vertex | true | Whether to lazy load adjacent vertices of edges. |
| vertex.part_edge_commit_size | 5000 | Whether to enable the mode to commit part of edges of vertex, enabled if commit size > 0, 0 means disabled. |
| vertex.encode_primary_key_number | true | Whether to encode number value of primary key in vertex id. |
| vertex.remove_left_index_at_overwrite | false | Whether remove left index at overwrite. |
| edge.cache_type | l2 | The type of edge cache, allowed values are [l1, l2]. |
| edge.cache_capacity | 1000000 | The max cache size(items) of edge cache. |
| edge.cache_expire | 600 | The expiration time in seconds of edge cache. |
| edge.tx_capacity | 10000 | The max size(items) of edges(uncommitted) in transaction. |
| query.page_size | 500 | The size of each page when querying by paging. |
| query.batch_size | 1000 | The size of each batch when querying by batch. |
| query.ignore_invalid_data | true | Whether to ignore invalid data of vertex or edge. |
| query.index_intersect_threshold | 1000 | The maximum number of intermediate results to intersect indexes when querying by multiple single index properties. |
| query.ramtable_edges_capacity | 20000000 | The maximum number of edges in ramtable, include OUT and IN edges. |
| query.ramtable_enable | false | Whether to enable ramtable for query of adjacent edges. |
| query.ramtable_vertices_capacity | 10000000 | The maximum number of vertices in ramtable, generally the largest vertex id is used as capacity. |
| query.optimize_aggregate_by_index | false | Whether to optimize aggregate query(like count) by index. |
| oltp.concurrent_depth | 10 | The min depth to enable concurrent oltp algorithm. |
| oltp.concurrent_threads | 10 | Thread number to concurrently execute oltp algorithm. |
| oltp.collection_type | EC | The implementation type of collections used in oltp algorithm. |
| rate_limit.read | 0 | The max rate(times/s) to execute query of vertices/edges. |
| rate_limit.write | 0 | The max rate(items/s) to add/update/delete vertices/edges. |
| task.wait_timeout | 10 | Timeout in seconds for waiting for the task to complete, such as when truncating or clearing the backend. |
| task.input_size_limit | 16777216 | The job input size limit in bytes. |
| task.result_size_limit | 16777216 | The job result size limit in bytes. |
| task.sync_deletion | false | Whether to delete schema or expired data synchronously. |
| task.ttl_delete_batch | 1 | The batch size used to delete expired data. |
| computer.config | /conf/computer.yaml | The config file path of computer job. |
| search.text_analyzer | ikanalyzer | Choose a text analyzer for searching the vertex/edge properties, available type are [word, ansj, hanlp, smartcn, jieba, jcseg, mmseg4j, ikanalyzer]. Note: if using 'ikanalyzer', download the jar from 'https://github.com/apache/hugegraph-doc/raw/ik_binary/dist/server/ikanalyzer-2012_u6.jar' to the lib directory. |
| search.text_analyzer_mode | smart | Specify the mode for the text analyzer, the available mode of analyzer are {word: [MaximumMatching, ReverseMaximumMatching, MinimumMatching, ReverseMinimumMatching, BidirectionalMaximumMatching, BidirectionalMinimumMatching, BidirectionalMaximumMinimumMatching, FullSegmentation, MinimalWordCount, MaxNgramScore, PureEnglish], ansj: [BaseAnalysis, IndexAnalysis, ToAnalysis, NlpAnalysis], hanlp: [standard, nlp, index, nShort, shortest, speed], smartcn: [], jieba: [SEARCH, INDEX], jcseg: [Simple, Complex], mmseg4j: [Simple, Complex, MaxWord], ikanalyzer: [smart, max_word]}. |
| snowflake.datecenter_id | 0 | The datacenter id of snowflake id generator. |
| snowflake.force_string | false | Whether to force the snowflake long id to be a string. |
| snowflake.worker_id | 0 | The worker id of snowflake id generator. |
| raft.mode | false | Whether the backend storage works in raft mode. |
| raft.safe_read | false | Whether to use linearly consistent read. |
| raft.use_snapshot | false | Whether to use snapshot. |
| raft.endpoint | 127.0.0.1:8281 | The peerid of current raft node. |
| raft.group_peers | 127.0.0.1:8281,127.0.0.1:8282,127.0.0.1:8283 | The peers of current raft group. |
| raft.path | ./raft-log | The log path of current raft node. |
| raft.use_replicator_pipeline | true | Whether to use the replicator pipeline; when turned on, multiple logs can be sent in parallel and the next log doesn't have to wait for the ack message of the current log. |
| raft.election_timeout | 10000 | Timeout in milliseconds to launch a round of election. |
| raft.snapshot_interval | 3600 | The interval in seconds to trigger snapshot save. |
| raft.backend_threads | current CPU v-cores | The thread number used to apply task to backend. |
| raft.read_index_threads | 8 | The thread number used to execute reading index. |
| raft.apply_batch | 1 | The apply batch size to trigger disruptor event handler. |
| raft.queue_size | 16384 | The disruptor buffers size for jraft RaftNode, StateMachine and LogManager. |
| raft.queue_publish_timeout | 60 | The timeout in second when publish event into disruptor. |
| raft.rpc_threads | 80 | The rpc threads for jraft RPC layer. |
| raft.rpc_connect_timeout | 5000 | The rpc connect timeout for jraft rpc. |
| raft.rpc_timeout | 60000 | The rpc timeout for jraft rpc. |
| raft.rpc_buf_low_water_mark | 10485760 | The ChannelOutboundBuffer's low water mark of netty; when the buffer size is less than this size, the method ChannelOutboundBuffer.isWritable() will return true, meaning low downstream pressure or a good network. |
| raft.rpc_buf_high_water_mark | 20971520 | The ChannelOutboundBuffer's high water mark of netty; only when the buffer size exceeds this size will the method ChannelOutboundBuffer.isWritable() return false, meaning the downstream pressure is too great to process the request or the network is very congested, and upstream needs to limit its rate. |
| raft.read_strategy | ReadOnlyLeaseBased | The linearizability of read strategy. |
RPC server config options
| config option | default value | description |
|---|---|---|
| rpc.client_connect_timeout | 20 | The timeout(in seconds) of rpc client connect to rpc server. |
| rpc.client_load_balancer | consistentHash | The rpc client uses a load-balancing algorithm to access multiple rpc servers in one cluster. Default value is 'consistentHash', means forwarding by request parameters. |
| rpc.client_read_timeout | 40 | The timeout(in seconds) of rpc client read from rpc server. |
| rpc.client_reconnect_period | 10 | The period(in seconds) of rpc client reconnect to rpc server. |
| rpc.client_retries | 3 | Failed retry number of rpc client calls to rpc server. |
| rpc.config_order | 999 | Sofa rpc configuration file loading order, the larger the more later loading. |
| rpc.logger_impl | com.alipay.sofa.rpc.log.SLF4JLoggerImpl | Sofa rpc log implementation class. |
| rpc.protocol | bolt | Rpc communication protocol, client and server need to be specified the same value. |
| rpc.remote_url | | The remote urls of rpc peers, it can be set to multiple addresses, which are concat by ',', empty value means not enabled. |
| rpc.server_adaptive_port | false | Whether the bound port is adaptive, if it's enabled, when the port is in use, automatically +1 to detect the next available port. Note that this process is not atomic, so there may still be port conflicts. |
| rpc.server_host | | The hosts/ips bound by rpc server to provide services, empty value means not enabled. |
| rpc.server_port | 8090 | The port bound by rpc server to provide services. |
| rpc.server_timeout | 30 | The timeout(in seconds) of rpc server execution. |
Cassandra backend config options
| config option | default value | description |
|---|---|---|
| backend | | Must be set to cassandra. |
| serializer | | Must be set to cassandra. |
| cassandra.host | localhost | The seeds hostname or ip address of cassandra cluster. |
| cassandra.port | 9042 | The seeds port address of cassandra cluster. |
| cassandra.connect_timeout | 5 | The cassandra driver connect server timeout(seconds). |
| cassandra.read_timeout | 20 | The cassandra driver read from server timeout(seconds). |
| cassandra.keyspace.strategy | SimpleStrategy | The replication strategy of keyspace, valid value is SimpleStrategy or NetworkTopologyStrategy. |
| cassandra.keyspace.replication | [3] | The keyspace replication factor of SimpleStrategy, like '[3]'. Or replicas in each datacenter of NetworkTopologyStrategy, like '[dc1:2,dc2:1]'. |
| cassandra.username | | The username to use to login to cassandra cluster. |
| cassandra.password | | The password corresponding to cassandra.username. |
| cassandra.compression_type | none | The compression algorithm of cassandra transport: none/snappy/lz4. |
| cassandra.jmx_port | 7199 | The port of JMX API service for cassandra. |
| cassandra.aggregation_timeout | 43200 | The timeout in seconds of waiting for aggregation. |
ScyllaDB backend config options
| config option | default value | description |
|---|---|---|
| backend | | Must be set to scylladb. |
| serializer | | Must be set to scylladb. |

All other options are the same as for the Cassandra backend.
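For instance, a minimal hugegraph.properties sketch for a ScyllaDB-backed graph, reusing the cassandra.* options described above (the host and port values here are illustrative):
backend=scylladb
serializer=scylladb
store=hugegraph
cassandra.host=localhost
cassandra.port=9042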
RocksDB backend config options
| config option | default value | description |
|---|---|---|
| backend | | Must be set to rocksdb. |
| serializer | | Must be set to binary. |
| rocksdb.data_disks | [] | The optimized disks for storing data of RocksDB. The format of each element: STORE/TABLE: /path/disk. Allowed keys are [g/vertex, g/edge_out, g/edge_in, g/vertex_label_index, g/edge_label_index, g/range_int_index, g/range_float_index, g/range_long_index, g/range_double_index, g/secondary_index, g/search_index, g/shard_index, g/unique_index, g/olap] |
| rocksdb.data_path | rocksdb-data | The path for storing data of RocksDB. |
| rocksdb.wal_path | rocksdb-data | The path for storing WAL of RocksDB. |
| rocksdb.allow_mmap_reads | false | Allow the OS to mmap file for reading sst tables. |
| rocksdb.allow_mmap_writes | false | Allow the OS to mmap file for writing. |
| rocksdb.block_cache_capacity | 8388608 | The amount of block cache in bytes that will be used by RocksDB, 0 means no block cache. |
| rocksdb.bloom_filter_bits_per_key | -1 | The bits per key in bloom filter, a good value is 10, which yields a filter with ~ 1% false positive rate, -1 means no bloom filter. |
| rocksdb.bloom_filter_block_based_mode | false | Use block based filter rather than full filter. |
| rocksdb.bloom_filter_whole_key_filtering | true | True if place whole keys in the bloom filter, else place the prefix of keys. |
| rocksdb.bottommost_compression | NO_COMPRESSION | The compression algorithm for the bottommost level of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd. |
| rocksdb.bulkload_mode | false | Switch to the mode to bulk load data into RocksDB. |
| rocksdb.cache_index_and_filter_blocks | false | Indicating if we'd put index/filter blocks to the block cache. |
| rocksdb.compaction_style | LEVEL | Set compaction style for RocksDB: LEVEL/UNIVERSAL/FIFO. |
| rocksdb.compression | SNAPPY_COMPRESSION | The compression algorithm for compressing blocks of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd. |
| rocksdb.compression_per_level | [NO_COMPRESSION, NO_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION] | The compression algorithms for different levels of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd. |
| rocksdb.delayed_write_rate | 16777216 | The rate limit in bytes/s of user write requests when need to slow down if the compaction gets behind. |
| rocksdb.log_level | INFO | The info log level of RocksDB. |
| rocksdb.max_background_jobs | 8 | Maximum number of concurrent background jobs, including flushes and compactions. |
| rocksdb.level_compaction_dynamic_level_bytes | false | Whether to enable level_compaction_dynamic_level_bytes, if it's enabled we give max_bytes_for_level_multiplier a priority against max_bytes_for_level_base, the bytes of base level is dynamic for a more predictable LSM tree, it is useful to limit worse case space amplification. Turning this feature on/off for an existing DB can cause unexpected LSM tree structure so it's not recommended. |
| rocksdb.max_bytes_for_level_base | 536870912 | The upper-bound of the total size of level-1 files in bytes. |
| rocksdb.max_bytes_for_level_multiplier | 10.0 | The ratio between the total size of level (L+1) files and the total size of level L files for all L. |
| rocksdb.max_open_files | -1 | The maximum number of open files that can be cached by RocksDB, -1 means no limit. |
| rocksdb.max_subcompactions | 4 | The value represents the maximum number of threads per compaction job. |
| rocksdb.max_write_buffer_number | 6 | The maximum number of write buffers that are built up in memory. |
| rocksdb.max_write_buffer_number_to_maintain | 0 | The total maximum number of write buffers to maintain in memory. |
| rocksdb.min_write_buffer_number_to_merge | 2 | The minimum number of write buffers that will be merged together. |
| rocksdb.num_levels | 7 | Set the number of levels for this database. |
| rocksdb.optimize_filters_for_hits | false | This flag allows us to not store filters for the last level. |
| rocksdb.optimize_mode | true | Optimize for heavy workloads and big datasets. |
| rocksdb.pin_l0_filter_and_index_blocks_in_cache | false | Indicating if we'd put index/filter blocks to the block cache. |
| rocksdb.sst_path | | The path for ingesting SST file into RocksDB. |
| rocksdb.target_file_size_base | 67108864 | The target file size for compaction in bytes. |
| rocksdb.target_file_size_multiplier | 1 | The size ratio between a level L file and a level (L+1) file. |
| rocksdb.use_direct_io_for_flush_and_compaction | false | Enable the OS to use direct read/writes in flush and compaction. |
| rocksdb.use_direct_reads | false | Enable the OS to use direct I/O for reading sst tables. |
| rocksdb.write_buffer_size | 134217728 | Amount of data in bytes to build up in memory. |
| rocksdb.max_manifest_file_size | 104857600 | The max size of manifest file in bytes. |
| rocksdb.skip_stats_update_on_db_open | false | Whether to skip statistics update when opening the database, setting this flag true allows us to not update statistics. |
| rocksdb.max_file_opening_threads | 16 | The max number of threads used to open files. |
| rocksdb.max_total_wal_size | 0 | Total size of WAL files in bytes. Once WALs exceed this size, we will start forcing the flush of column families related, 0 means no limit. |
| rocksdb.db_write_buffer_size | 0 | Total size of write buffers in bytes across all column families, 0 means no limit. |
| rocksdb.delete_obsolete_files_period | 21600 | The periodicity in seconds when obsolete files get deleted, 0 means always do full purge. |
| rocksdb.hard_pending_compaction_bytes_limit | 274877906944 | The hard limit to impose on pending compaction in bytes. |
| rocksdb.level0_file_num_compaction_trigger | 2 | Number of files to trigger level-0 compaction. |
| rocksdb.level0_slowdown_writes_trigger | 20 | Soft limit on number of level-0 files for slowing down writes. |
| rocksdb.level0_stop_writes_trigger | 36 | Hard limit on number of level-0 files for stopping writes. |
| rocksdb.soft_pending_compaction_bytes_limit | 68719476736 | The soft limit to impose on pending compaction in bytes. |
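As an illustration of the rocksdb.data_disks format described above (the disk paths are hypothetical), vertex data and outgoing edges could be placed on separate disks like this:
rocksdb.data_disks=[g/vertex:/disk1/hugegraph, g/edge_out:/disk2/hugegraph]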
HBase backend config options
| config option | default value | description |
|---|---|---|
| backend | | Must be set to hbase. |
| serializer | | Must be set to hbase. |
| hbase.hosts | localhost | The hostnames or ip addresses of HBase zookeeper, separated with commas. |
| hbase.port | 2181 | The port address of HBase zookeeper. |
| hbase.threads_max | 64 | The max threads num of hbase connections. |
| hbase.znode_parent | /hbase | The znode parent path of HBase zookeeper. |
| hbase.zk_retry | 3 | The recovery retry times of HBase zookeeper. |
| hbase.aggregation_timeout | 43200 | The timeout in seconds of waiting for aggregation. |
| hbase.kerberos_enable | false | Is Kerberos authentication enabled for HBase. |
| hbase.kerberos_keytab | | The HBase's key tab file for kerberos authentication. |
| hbase.kerberos_principal | | The HBase's principal for kerberos authentication. |
| hbase.krb5_conf | etc/krb5.conf | Kerberos configuration file, including KDC IP, default realm, etc. |
| hbase.hbase_site | /etc/hbase/conf/hbase-site.xml | The HBase's configuration file |
| hbase.enable_partition | true | Is pre-split partitions enabled for HBase. |
| hbase.vertex_partitions | 10 | The number of partitions of the HBase vertex table. |
| hbase.edge_partitions | 30 | The number of partitions of the HBase edge table. |
MySQL & PostgreSQL backend config options
| config option | default value | description |
|---|---|---|
| backend | | Must be set to mysql. |
| serializer | | Must be set to mysql. |
| jdbc.driver | com.mysql.jdbc.Driver | The JDBC driver class to connect database. |
| jdbc.url | jdbc:mysql://127.0.0.1:3306 | The url of database in JDBC format. |
| jdbc.username | root | The username to login database. |
| jdbc.password | ****** | The password corresponding to jdbc.username. |
| jdbc.ssl_mode | false | The SSL mode of connections with database. |
| jdbc.reconnect_interval | 3 | The interval(seconds) between reconnections when the database connection fails. |
| jdbc.reconnect_max_times | 3 | The reconnect times when the database connection fails. |
| jdbc.storage_engine | InnoDB | The storage engine of backend store database, like InnoDB/MyISAM/RocksDB for MySQL. |
| jdbc.postgresql.connect_database | template1 | The database used to connect when init store, drop store or check store exist. |
PostgreSQL backend config options
| config option | default value | description |
|---|---|---|
| backend | | Must be set to postgresql. |
| serializer | | Must be set to postgresql. |

All other options are the same as for the MySQL backend.
The driver and url of the PostgreSQL backend should be set to:
jdbc.driver=org.postgresql.Driver
jdbc.url=jdbc:postgresql://localhost:5432/
3 - HugeGraph built-in user permissions and extended permission configuration and usage
Overview
To make authentication easy to use across different scenarios, HugeGraph currently ships with two built-in permission modes:
- The simple ConfigAuthenticator mode, which stores usernames and passwords in a local config file (supports a single GraphServer only)
- The full StandardAuthenticator mode, which supports multi-user authentication and fine-grained access control, using a 4-layer "user-group-operation-resource" design to flexibly control user roles and permissions (supports multiple GraphServers)
Core design points of the StandardAuthenticator mode:
- A super administrator (admin) user is created during initialization; the super administrator then creates other users, and a newly created user, once granted sufficient permissions, can create or manage more users
- Users, groups and resources can be created dynamically, and permissions can be granted or revoked dynamically
- A user can belong to one or more groups, and each group can hold operation permissions on any number of resources; operation types include read, write, delete, execute, and so on
- A "resource" describes data in the graph database, such as vertices matching certain criteria. Each resource consists of three elements: type, label and properties. There are 18 types that can be combined with any label and any properties to form a resource; the conditions within one resource are AND-ed, while conditions across multiple resources are OR-ed
For example:
// Scenario: a user has read permission only for data in the Beijing region
@@ -211,23 +211,23 @@
authenticationHandler: com.baidu.hugegraph.auth.WsAndHttpBasicAuthHandler,
config: {tokens: conf/rest-server.properties}
}
-
Configure the authenticator and its graph_store in the rest-server.properties config file:
auth.authenticator=com.baidu.hugegraph.auth.StandardAuthenticator
-auth.graph_store=hugegraph
-
-# auth client config
-# If GraphServer and AuthServer are deployed separately, this option must also be set, with the value being the AuthServer's IP:RPC-port
-#auth.remote_url=127.0.0.1:8899,127.0.0.1:8898,127.0.0.1:8897
-
The graph_store option specifies which graph is used to store user information; if multiple graphs exist, any one of them can be chosen.
Configure gremlin.graph in the hugegraph{n}.properties config file:
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
-
For detailed permission API calls and explanations, please refer to the Authentication-API documentation
ConfigAuthenticator mode
The ConfigAuthenticator mode supports user authentication by pre-defining user information in the config file; the implementation verifies whether a user is legitimate against statically configured tokens. The concrete configuration steps are as follows (restart the service to take effect):
Configure the authenticator and the rest-server file path in the gremlin-server.yaml config file:
authentication: {
+
Configure the authenticator and its graph_store in the rest-server.properties config file:
auth.authenticator=com.baidu.hugegraph.auth.StandardAuthenticator
+auth.graph_store=hugegraph
+
+# auth client config
+# If GraphServer and AuthServer are deployed separately, this option must also be set, with the value being the AuthServer's IP:RPC-port
+#auth.remote_url=127.0.0.1:8899,127.0.0.1:8898,127.0.0.1:8897
+
The graph_store option specifies which graph is used to store user information; if multiple graphs exist, any one of them can be chosen.
Configure gremlin.graph in the hugegraph{n}.properties config file:
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
+
For detailed permission API calls and explanations, please refer to the Authentication-API documentation
ConfigAuthenticator mode
The ConfigAuthenticator mode supports user authentication by pre-defining user information in the config file; the implementation verifies whether a user is legitimate against statically configured tokens. The concrete configuration steps are as follows (restart the service to take effect):
Configure the authenticator and the rest-server file path in the gremlin-server.yaml config file:
authentication: {
authenticator: com.baidu.hugegraph.auth.ConfigAuthenticator,
authenticationHandler: com.baidu.hugegraph.auth.WsAndHttpBasicAuthHandler,
config: {tokens: conf/rest-server.properties}
}
-
Configure the authenticator and its tokens in the rest-server.properties config file:
auth.authenticator=com.baidu.hugegraph.auth.ConfigAuthenticator
-auth.admin_token=token-value-a
-auth.user_tokens=[hugegraph1:token-value-1, hugegraph2:token-value-2]
-
Configure gremlin.graph in the hugegraph{n}.properties config file:
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
-
Custom user authentication system
If a more flexible user system is needed, the authenticator can be extended: simply implement the interface com.baidu.hugegraph.auth.HugeAuthenticator with a custom authenticator, then point the authenticator option in the config file at that implementation.
4 - Configure HugeGraphServer to use the https protocol
Overview
HugeGraphServer uses the http protocol by default; if request security is required, https can be configured instead.
Server-side configuration
Modify the conf/rest-server.properties config file and change the scheme part of restserver.url to https.
# set the protocol to https
+
Configure the authenticator and its tokens in the rest-server.properties config file:
auth.authenticator=com.baidu.hugegraph.auth.ConfigAuthenticator
+auth.admin_token=token-value-a
+auth.user_tokens=[hugegraph1:token-value-1, hugegraph2:token-value-2]
+
Configure gremlin.graph in the hugegraph{n}.properties config file:
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
+
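Once the three files above are configured and the service restarted, requests can be authenticated; a hedged sketch assuming HTTP Basic auth with the user name and its token as the password (using the sample values above):
$ curl -u hugegraph1:token-value-1 http://localhost:8080/graphs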
Custom user authentication system
If a more flexible user system is needed, the authenticator can be extended: simply implement the interface com.baidu.hugegraph.auth.HugeAuthenticator with a custom authenticator, then point the authenticator option in the config file at that implementation.
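For instance, if a custom implementation were packaged as com.example.MyAuthenticator (a hypothetical class name), switching to it would only require pointing the option at that class:
auth.authenticator=com.example.MyAuthenticator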
4 - Configure HugeGraphServer to use the https protocol
Overview
HugeGraphServer uses the http protocol by default; if request security is required, https can be configured instead.
Server-side configuration
Modify the conf/rest-server.properties config file and change the scheme part of restserver.url to https.
# set the protocol to https
restserver.url=https://127.0.0.1:8080
# path of the server keystore file; this default takes effect automatically when the protocol is https, and can be modified as needed
ssl.keystore_file=conf/hugegraph-server.keystore
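After a restart the https endpoint can be probed from the command line; a sketch using curl's -k flag to accept the default self-signed certificate:
$ curl -k https://127.0.0.1:8080/graphs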
@@ -258,13 +258,13 @@
# when running the migrate command, if --target-url uses the https protocol, the default value hugegraph takes effect automatically and can be modified as needed
--target-trust-store-password {target-password}
A default client certificate file, hugegraph.truststore, whose password is hugegraph, is already provided in the conf directory of hugegraph-tools.
How to generate certificate files
This section gives an example of generating certificates; skip it if the default certificates are sufficient or you already know how to generate them.
Server side
- Generate the server private key and import it into the server keystore file; server.keystore is used by the server and holds its own private key
keytool -genkey -alias serverkey -keyalg RSA -keystore server.keystore
-
Fill in the descriptive information as needed during the process; the default certificate's description is as follows:
First and last name: hugegraph
-Organizational unit: hugegraph
-Organization: hugegraph
-City or locality: BJ
-State or province: BJ
-Country code: CN
-
- Export the server certificate from the server private key
keytool -export -alias serverkey -keystore server.keystore -file server.crt
+
Fill in the descriptive information as needed during the process; the default certificate's description is as follows:
First and last name: hugegraph
+Organizational unit: hugegraph
+Organization: hugegraph
+City or locality: BJ
+State or province: BJ
+Country code: CN
+
- Export the server certificate from the server private key
keytool -export -alias serverkey -keystore server.keystore -file server.crt
server.crt is the server's certificate
Client side
keytool -import -alias serverkey -file server.crt -keystore client.truststore
client.truststore is used by the client and holds the trusted certificates
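To double-check what was imported, the truststore contents can be listed with keytool (it prompts for the truststore password):
keytool -list -keystore client.truststore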
5 - HugeGraph-Computer config
Computer Config Options
| config option | default value | description |
|---|---|---|
| algorithm.message_class | org.apache.hugegraph.computer.core.config.Null | The class of message passed when compute vertex. |
| algorithm.params_class | org.apache.hugegraph.computer.core.config.Null | The class used to transfer algorithms' parameters before algorithm been run. |
| algorithm.result_class | org.apache.hugegraph.computer.core.config.Null | The class of vertex's value, the instance is used to store computation result for the vertex. |
| allocator.max_vertices_per_thread | 10000 | Maximum number of vertices per thread processed in each memory allocator |
| bsp.etcd_endpoints | http://localhost:2379 | The end points to access etcd. |
| bsp.log_interval | 30000 | The log interval(in ms) to print the log while waiting bsp event. |
| bsp.max_super_step | 10 | The max super step of the algorithm. |
| bsp.register_timeout | 300000 | The max timeout to wait for master and works to register. |
| bsp.wait_master_timeout | 86400000 | The max timeout(in ms) to wait for master bsp event. |
| bsp.wait_workers_timeout | 86400000 | The max timeout to wait for workers bsp event. |
| hgkv.max_data_block_size | 65536 | The max byte size of hgkv-file data block. |
| hgkv.max_file_size | 2147483648 | The max number of bytes in each hgkv-file. |
| hgkv.max_merge_files | 10 | The max number of files to merge at one time. |
| hgkv.temp_file_dir | /tmp/hgkv | This folder is used to store temporary files, temporary files will be generated during the file merging process. |
| hugegraph.name | hugegraph | The graph name to load data and write results back. |
| hugegraph.url | http://127.0.0.1:8080 | The hugegraph url to load data and write results back. |
| input.edge_direction | OUT | The data of the edge in which direction is loaded, when the value is BOTH, the edges in both OUT and IN direction will be loaded. |
| input.edge_freq | MULTIPLE | The frequency of edges can exist between a pair of vertices, allowed values: [SINGLE, SINGLE_PER_LABEL, MULTIPLE]. SINGLE means that only one edge can exist between a pair of vertices, use sourceId + targetId to identify it; SINGLE_PER_LABEL means that each edge label can exist one edge between a pair of vertices, use sourceId + edgelabel + targetId to identify it; MULTIPLE means that many edge can exist between a pair of vertices, use sourceId + edgelabel + sortValues + targetId to identify it. |
| input.filter_class | org.apache.hugegraph.computer.core.input.filter.DefaultInputFilter | The class to create input-filter object, input-filter is used to Filter vertex edges according to user needs. |
| input.loader_schema_path | | The schema path of loader input, only takes effect when the input.source_type=loader is enabled |
| input.loader_struct_path | | The struct path of loader input, only takes effect when the input.source_type=loader is enabled |
| input.max_edges_in_one_vertex | 200 | The maximum number of adjacent edges allowed to be attached to a vertex, the adjacent edges will be stored and transferred together as a batch unit. |
| input.source_type | hugegraph-server | The source type to load input data, allowed values: ['hugegraph-server', 'hugegraph-loader'], the 'hugegraph-loader' means use hugegraph-loader load data from HDFS or file, if use 'hugegraph-loader' load data then please config 'input.loader_struct_path' and 'input.loader_schema_path'. |
| input.split_fetch_timeout | 300 | The timeout in seconds to fetch input splits |
| input.split_max_splits | 10000000 | The maximum number of input splits |
| input.split_page_size | 500 | The page size for streamed load input split data |
| input.split_size | 1048576 | The input split size in bytes |
| job.id | local_0001 | The job id on Yarn cluster or K8s cluster. |
| job.partitions_count | 1 | The partitions count for computing one graph algorithm job. |
| job.partitions_thread_nums | 4 | The number of threads for partition parallel compute. |
| job.workers_count | 1 | The workers count for computing one graph algorithm job. |
| master.computation_class | org.apache.hugegraph.computer.core.master.DefaultMasterComputation | Master-computation is computation that can determine whether to continue next superstep. It runs at the end of each superstep on master. |
| output.batch_size | 500 | The batch size of output |
| output.batch_threads | 1 | The threads number used to batch output |
| output.hdfs_core_site_path | | The hdfs core site path. |
| output.hdfs_delimiter | , | The delimiter of hdfs output. |
| output.hdfs_kerberos_enable | false | Is Kerberos authentication enabled for Hdfs. |
| output.hdfs_kerberos_keytab | | The Hdfs's key tab file for kerberos authentication. |
| output.hdfs_kerberos_principal | | The Hdfs's principal for kerberos authentication. |
| output.hdfs_krb5_conf | /etc/krb5.conf | Kerberos configuration file. |
| output.hdfs_merge_partitions | true | Whether merge output files of multiple partitions. |
| output.hdfs_path_prefix | /hugegraph-computer/results | The directory of hdfs output result. |
| output.hdfs_replication | 3 | The replication number of hdfs. |
| output.hdfs_site_path | | The hdfs site path. |
| output.hdfs_url | hdfs://127.0.0.1:9000 | The hdfs url of output. |
| output.hdfs_user | hadoop | The hdfs user of output. |
| output.output_class | org.apache.hugegraph.computer.core.output.LogOutput | The class to output the computation result of each vertex. Be called after iteration computation. |
| output.result_name | value | The value is assigned dynamically by #name() of instance created by WORKER_COMPUTATION_CLASS. |
| output.result_write_type | OLAP_COMMON | The result write-type to output to hugegraph, allowed values are: [OLAP_COMMON, OLAP_SECONDARY, OLAP_RANGE]. |
| output.retry_interval | 10 | The retry interval when output failed |
| output.retry_times | 3 | The retry times when output failed |
| output.single_threads | 1 | The threads number used to single output |
| output.thread_pool_shutdown_timeout | 60 | The timeout seconds of output threads pool shutdown |
| output.with_adjacent_edges | false | Output the adjacent edges of the vertex or not |
| output.with_edge_properties | false | Output the properties of the edge or not |
| output.with_vertex_properties | false | Output the properties of the vertex or not |
| sort.thread_nums | 4 | The number of threads performing internal sorting. |
| transport.client_connect_timeout | 3000 | The timeout(in ms) of client connect to server. |
| transport.client_threads | 4 | The number of transport threads for client. |
| transport.close_timeout | 10000 | The timeout(in ms) of close server or close client. |
| transport.finish_session_timeout | 0 | The timeout(in ms) to finish session, 0 means using (transport.sync_request_timeout * transport.max_pending_requests). |
| transport.heartbeat_interval | 20000 | The minimum interval(in ms) between heartbeats on client side. |
| transport.io_mode | AUTO | The network IO Mode, either 'NIO', 'EPOLL', 'AUTO', the 'AUTO' means selecting the property mode automatically. |
| transport.max_pending_requests | 8 | The max number of client unreceived ack, it will trigger the sending unavailable if the number of unreceived ack >= max_pending_requests. |
| transport.max_syn_backlog | 511 | The capacity of SYN queue on server side, 0 means using system default value. |
| transport.max_timeout_heartbeat_count | 120 | The maximum times of timeout heartbeat on client side, if the number of timeouts waiting for heartbeat response continuously > max_heartbeat_timeouts the channel will be closed from client side. |
| transport.min_ack_interval | 200 | The minimum interval(in ms) of server reply ack. |
| transport.min_pending_requests | 6 | The minimum number of client unreceived ack, it will trigger the sending available if the number of unreceived ack < min_pending_requests. |
| transport.network_retries | 3 | The number of retry attempts for network communication, if network unstable. |
| transport.provider_class | org.apache.hugegraph.computer.core.network.netty.NettyTransportProvider | The transport provider, currently only supports Netty. |
| transport.receive_buffer_size | 0 | The size of socket receive-buffer in bytes, 0 means using system default value. |
| transport.recv_file_mode | true | Whether enable receive buffer-file mode, it will receive buffer write file from socket by zero-copy if enable. |
| transport.send_buffer_size | 0 | The size of socket send-buffer in bytes, 0 means using system default value. |
| transport.server_host | 127.0.0.1 | The server hostname or ip to listen on to transfer data. |
| transport.server_idle_timeout | 360000 | The max timeout(in ms) of server idle. |
| transport.server_port | 0 | The server port to listen on to transfer data. The system will assign a random port if it's set to 0. |
| transport.server_threads | 4 | The number of transport threads for server. |
| transport.sync_request_timeout | 10000 | The timeout(in ms) to wait response after sending sync-request. |
| transport.tcp_keep_alive | true | Whether enable TCP keep-alive. |
| transport.transport_epoll_lt | false | Whether enable EPOLL level-trigger. |
| transport.write_buffer_high_mark | 67108864 | The high water mark for write buffer in bytes, it will trigger the sending unavailable if the number of queued bytes > write_buffer_high_mark. |
| transport.write_buffer_low_mark | 33554432 | The low water mark for write buffer in bytes, it will trigger the sending available if the number of queued bytes < write_buffer_low_mark. |
| transport.write_socket_timeout | 3000 | The timeout(in ms) to write data to socket buffer. |
| valuefile.max_segment_size | 1073741824 | The max number of bytes in each segment of value-file. |
| worker.combiner_class | org.apache.hugegraph.computer.core.config.Null | Combiner can combine messages into one value for a vertex, for example page-rank algorithm can combine messages of a vertex to a sum value. |
| worker.computation_class | org.apache.hugegraph.computer.core.config.Null | The class to create worker-computation object, worker-computation is used to compute each vertex in each superstep. |
| worker.data_dirs | [jobs] | The directories separated by ',' that received vertices and messages can persist into. |
| worker.edge_properties_combiner_class | org.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombiner | The combiner can combine several properties of the same edge into one properties at inputstep. |
| worker.partitioner | org.apache.hugegraph.computer.core.graph.partition.HashPartitioner | The partitioner that decides which partition a vertex should be in, and which worker a partition should be in. |
| worker.received_buffers_bytes_limit | 104857600 | The limit bytes of buffers of received data, the total size of all buffers can't excess this limit. If received buffers reach this limit, they will be merged into a file. |
| worker.vertex_properties_combiner_class | org.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombiner | The combiner can combine several properties of the same vertex into one properties at inputstep. |
| worker.wait_finish_messages_timeout | 86400000 | The max timeout(in ms) message-handler wait for finish-message of all workers. |
| worker.wait_sort_timeout | 600000 | The max timeout(in ms) message-handler wait for sort-thread to sort one batch of buffers. |
| worker.write_buffer_capacity | 52428800 | The initial size of write buffer that used to store vertex or message. |
| worker.write_buffer_threshold | 52428800 | The threshold of write buffer, exceeding it will trigger sorting, the write buffer is used to store vertex or message. |
K8s Operator Config Options
NOTE: these options need to be converted through environment variable settings, e.g. k8s.internal_etcd_url => INTERNAL_ETCD_URL
| config option | default value | description |
|---|---|---|
| k8s.auto_destroy_pod | true | Whether to automatically destroy all pods when the job is completed or failed. |
| k8s.close_reconciler_timeout | 120 | The max timeout(in ms) to close reconciler. |
| k8s.internal_etcd_url | http://127.0.0.1:2379 | The internal etcd url for operator system. |
| k8s.max_reconcile_retry | 3 | The max retry times of reconcile. |
| k8s.probe_backlog | 50 | The maximum backlog for serving health probes. |
| k8s.probe_port | 9892 | The value is the port that the controller bind to for serving health probes. |
| k8s.ready_check_internal | 1000 | The time interval(ms) of check ready. |
| k8s.ready_timeout | 30000 | The max timeout(in ms) of check ready. |
| k8s.reconciler_count | 10 | The max number of reconciler thread. |
| k8s.resync_period | 600000 | The minimum frequency at which watched resources are reconciled. |
| k8s.timezone | Asia/Shanghai | The timezone of computer job and operator. |
| k8s.watch_namespace | hugegraph-computer-system | The value is watch custom resources in the namespace, ignore other namespaces, the '*' means is all namespaces will be watched. |
HugeGraph-Computer CRD
| spec | default value | description | required |
|---|---|---|---|
| algorithmName | | The name of algorithm. | true |
| jobId | | The job id. | true |
| image | | The image of algorithm. | true |
| computerConf | | The map of computer config options. | true |
| workerInstances | | The number of worker instances, it will override the 'job.workers_count' option. | true |
| pullPolicy | Always | The pull-policy of image, detail please refer to: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy | false |
| pullSecrets | | The pull-secrets of Image, detail please refer to: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | false |
| masterCpu | | The cpu limit of master, the unit can be 'm' or without unit, detail please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu | false |
| workerCpu | | The cpu limit of worker, the unit can be 'm' or without unit, detail please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu | false |
| masterMemory | | The memory limit of master, the unit can be one of Ei, Pi, Ti, Gi, Mi, Ki, detail please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory | false |
| workerMemory | | The memory limit of worker, the unit can be one of Ei, Pi, Ti, Gi, Mi, Ki, detail please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory | false |
| log4jXml | | The content of log4j.xml for computer job. | false |
| jarFile | | The jar path of computer algorithm. | false |
| remoteJarUri | | The remote jar uri of computer algorithm, it will overlay algorithm image. | false |
| jvmOptions | | The java startup parameters of computer job. | false |
| envVars | | please refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/ | false |
| envFrom | | please refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/ | false |
| masterCommand | bin/start-computer.sh | The run command of master, equivalent to 'Entrypoint' field of Docker. | false |
| masterArgs | ["-r master", "-d k8s"] | The run args of master, equivalent to 'Cmd' field of Docker. | false |
| workerCommand | bin/start-computer.sh | The run command of worker, equivalent to 'Entrypoint' field of Docker. | false |
| workerArgs | ["-r worker", "-d k8s"] | The run args of worker, equivalent to 'Cmd' field of Docker. | false |
| volumes | | Please refer to: https://kubernetes.io/docs/concepts/storage/volumes/ | false |
| volumeMounts | | Please refer to: https://kubernetes.io/docs/concepts/storage/volumes/ | false |
| secretPaths | | The map of k8s-secret name and mount path. | false |
| configMapPaths | | The map of k8s-configmap name and mount path. | false |
| podTemplateSpec | | Please refer to: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-template-v1/#PodTemplateSpec | false |
| securityContext | | Please refer to: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ | false |
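A minimal custom-resource sketch built from the required fields above; note that the apiVersion, kind, and metadata values shown here are assumptions for illustration, so verify them against the CRD installed in your cluster:
apiVersion: hugegraph.apache.org/v1   # assumed group/version
kind: HugeGraphComputerJob            # assumed kind name
metadata:
  name: pagerank-demo                 # hypothetical name
  namespace: hugegraph-computer-system
spec:
  algorithmName: pageRank
  jobId: pagerank-demo-001            # hypothetical id
  image: hugegraph/hugegraph-computer:latest
  computerConf:
    job.partitions_count: "1"
  workerInstances: 1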
KubeDriver Config Options
| config option | default value | description |
|---|---|---|
| k8s.build_image_bash_path | | The path of command used to build image. |
| k8s.enable_internal_algorithm | true | Whether enable internal algorithm. |
| k8s.framework_image_url | hugegraph/hugegraph-computer:latest | The image url of computer framework. |
| k8s.image_repository_password | | The password for login image repository. |
| k8s.image_repository_registry | | The address for login image repository. |
| k8s.image_repository_url | hugegraph/hugegraph-computer | The url of image repository. |
| k8s.image_repository_username | | The username for login image repository. |
| k8s.internal_algorithm | [pageRank] | The name list of all internal algorithm. |
| k8s.internal_algorithm_image_url | hugegraph/hugegraph-computer:latest | The image url of internal algorithm. |
| k8s.jar_file_dir | /cache/jars/ | The directory where the algorithm jar to upload location. |
| k8s.kube_config | ~/.kube/config | The path of k8s config file. |
| k8s.log4j_xml_path | | The log4j.xml path for computer job. |
| k8s.namespace | hugegraph-computer-system | The namespace of hugegraph-computer system. |
| k8s.pull_secret_names | [] | The names of pull-secret for pulling image. |
diff --git a/cn/docs/config/config-authentication/index.html b/cn/docs/config/config-authentication/index.html
index 4ccdc2070..ae56df7f7 100644
--- a/cn/docs/config/config-authentication/index.html
+++ b/cn/docs/config/config-authentication/index.html
@@ -38,23 +38,23 @@
authenticationHandler: com.baidu.hugegraph.auth.WsAndHttpBasicAuthHandler,
config: {tokens: conf/rest-server.properties}
}
-Configure the authenticator and its graph_store in the rest-server.properties config file:
auth.authenticator=com.baidu.hugegraph.auth.StandardAuthenticator
-auth.graph_store=hugegraph
-
-# auth client config
-# If GraphServer and AuthServer are deployed separately, this option must also be set, with the value being the AuthServer's IP:RPC-port
-#auth.remote_url=127.0.0.1:8899,127.0.0.1:8898,127.0.0.1:8897
-
The graph_store option specifies which graph is used to store user information; if multiple graphs exist, any one of them can be chosen.
Configure gremlin.graph in the hugegraph{n}.properties config file:
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
-
For detailed permission API calls and explanations, please refer to the Authentication-API documentation
ConfigAuthenticator mode
The ConfigAuthenticator mode supports user authentication by pre-defining user information in the config file; the implementation verifies whether a user is legitimate against statically configured tokens. The concrete configuration steps are as follows (restart the service to take effect):
Configure the authenticator and the rest-server file path in the gremlin-server.yaml config file:
authentication: {
+
Configure the authenticator and its graph_store in the rest-server.properties config file:
auth.authenticator=com.baidu.hugegraph.auth.StandardAuthenticator
+auth.graph_store=hugegraph
+
+# auth client config
+# If GraphServer and AuthServer are deployed separately, this option must also be set, with the value being the AuthServer's IP:RPC-port
+#auth.remote_url=127.0.0.1:8899,127.0.0.1:8898,127.0.0.1:8897
+
The graph_store option specifies which graph is used to store user information; if multiple graphs exist, any one of them can be chosen.
Configure gremlin.graph in the hugegraph{n}.properties config file:
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
+
For detailed permission API calls and explanations, please refer to the Authentication-API documentation
ConfigAuthenticator mode
The ConfigAuthenticator mode supports user authentication by pre-defining user information in the config file; the implementation verifies whether a user is legitimate against statically configured tokens. The concrete configuration steps are as follows (restart the service to take effect):
Configure the authenticator and the rest-server file path in the gremlin-server.yaml config file:
authentication: {
authenticator: com.baidu.hugegraph.auth.ConfigAuthenticator,
authenticationHandler: com.baidu.hugegraph.auth.WsAndHttpBasicAuthHandler,
config: {tokens: conf/rest-server.properties}
}
-
Configure the authenticator and its tokens in the rest-server.properties config file:
auth.authenticator=com.baidu.hugegraph.auth.ConfigAuthenticator
-auth.admin_token=token-value-a
-auth.user_tokens=[hugegraph1:token-value-1, hugegraph2:token-value-2]
-
Configure gremlin.graph in the hugegraph{n}.properties config file:
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
-
Custom user authentication system
If a more flexible user system is needed, the authenticator can be extended: simply implement the interface com.baidu.hugegraph.auth.HugeAuthenticator with a custom authenticator, then point the authenticator option in the config file at that implementation.
Last modified April 17, 2022: rebuild doc (ef36544)
+Configure the authenticator and its tokens in the rest-server.properties config file:
auth.authenticator=com.baidu.hugegraph.auth.ConfigAuthenticator
+auth.admin_token=token-value-a
+auth.user_tokens=[hugegraph1:token-value-1, hugegraph2:token-value-2]
+
Configure gremlin.graph in the hugegraph{n}.properties config file:
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
+
Custom user authentication system
If a more flexible user system is needed, the authenticator can be extended: simply implement the interface com.baidu.hugegraph.auth.HugeAuthenticator with a custom authenticator, then point the authenticator option in the config file at that implementation.
Last modified April 17, 2022: rebuild doc (ef36544)
diff --git a/cn/docs/config/config-guide/index.html b/cn/docs/config/config-guide/index.html
index 3d78b6a91..6cb0b9b25 100644
--- a/cn/docs/config/config-guide/index.html
+++ b/cn/docs/config/config-guide/index.html
@@ -6,12 +6,12 @@
HugeGraphServer integrates GremlinServer and RestServer internally; gremlin-server.yaml and rest-server.properties are used to configure these two servers.
GremlinServer: accepts the user's gremlin statements, parses them and then calls Core code. RestServer: provides the RESTful API; depending on the HTTP request it calls the corresponding Core API, and if the request body is a gremlin statement it is forwarded to GremlinServer, enabling operations on the graph data. The three config files are introduced one by one below.
2 gremlin-server.yaml The default content of the gremlin-server.yaml file is as follows:
-# host and port of gremlin server, need to be consistent with host and port in rest-server.properties #host: 127.0.0.1 #port: 8182 # timeout in ms of gremlin query scriptEvaluationTimeout: 30000 channelizer: org.apache.tinkerpop.gremlin.server.channel.WsAndHttpChannelizer graphs: { hugegraph: conf/hugegraph.properties } scriptEngines: { gremlin-groovy: { plugins: { com.baidu.hugegraph.plugin.HugeGraphGremlinPlugin: {}, org.
http://localhost:8080/graphs/hugegraph/conf?token=162f7848-0b6d-4faf-b557-3a0797869c55
5 Multi-graph configuration
The system can host multiple graphs, and each graph can use a different backend; for example, graph hugegraph can use cassandra as its backend while hugegraph1 uses rocksdb.
Configuration is straightforward:
Modify gremlin-server.yaml
Add a key-value pair to the graphs field of gremlin-server.yaml, with the graph name as the key and the path of the graph's config file as the value, for example:
graphs: {
+Using HTTP is recommended, since HugeGraph's peripheral components are all built on HTTP; by default GremlinServer serves at localhost:8182, and if this needs to change, configure host and port
- host: the hostname or IP of the machine GremlinServer is deployed on; HugeGraphServer does not currently support distributed deployment, and GremlinServer is not exposed directly to users;
- port: the port of the machine GremlinServer is deployed on;
The matching option gremlinserver.url=http://host:port must also be added to rest-server.properties
3 rest-server.properties
The default content of the rest-server.properties file is as follows:
# bind url
+restserver.url=http://127.0.0.1:8080
+# gremlin server url, need to be consistent with host and port in gremlin-server.yaml
+#gremlinserver.url=http://127.0.0.1:8182
+
+# graphs list with pair NAME:CONF_PATH
+graphs=[hugegraph:conf/hugegraph.properties]
+
+# authentication
+#auth.authenticator=
+#auth.admin_token=
+#auth.user_tokens=[]
+
+server.id=server-1
+server.role=master
+
- restserver.url: the url at which RestServer provides its service; modify it according to the actual environment;
- graphs: the graphs RestServer needs to open at startup; this item is a map whose keys are graph names and whose values are the paths of the corresponding config files;
Note: both gremlin-server.yaml and rest-server.properties contain the graphs option, and the init-store
command initializes the graphs listed under graphs in gremlin-server.yaml.
The option gremlinserver.url is the url at which GremlinServer serves RestServer; it defaults to http://localhost:8182. If modified, it must match the host and port in gremlin-server.yaml;
4 hugegraph.properties
hugegraph.properties is a family of files: if the system hosts multiple graphs, there will be several similar files, one per graph. The file configures parameters related to graph storage and queries; its default content is as follows:
+# gremlin entrance to create graph
+gremlin.graph=com.baidu.hugegraph.HugeFactory
+
+# cache config
+#schema.cache_capacity=100000
+# vertex-cache default is 1000w, 10min expired
+#vertex.cache_capacity=10000000
+#vertex.cache_expire=600
+# edge-cache default is 100w, 10min expired
+#edge.cache_capacity=1000000
+#edge.cache_expire=600
+
+# schema illegal name template
+#schema.illegal_name_regex=\s+|~.*
+
+#vertex.default_label=vertex
+
+backend=rocksdb
+serializer=binary
+
+store=hugegraph
+
+raft.mode=false
+raft.safe_read=false
+raft.use_snapshot=false
+raft.endpoint=127.0.0.1:8281
+raft.group_peers=127.0.0.1:8281,127.0.0.1:8282,127.0.0.1:8283
+raft.path=./raft-log
+raft.use_replicator_pipeline=true
+raft.election_timeout=10000
+raft.snapshot_interval=3600
+raft.backend_threads=48
+raft.read_index_threads=8
+raft.queue_size=16384
+raft.queue_publish_timeout=60
+raft.apply_batch=1
+raft.rpc_threads=80
+raft.rpc_connect_timeout=5000
+raft.rpc_timeout=60000
+
+# if use 'ikanalyzer', need download jar from 'https://github.com/apache/hugegraph-doc/raw/ik_binary/dist/server/ikanalyzer-2012_u6.jar' to lib directory
+search.text_analyzer=jieba
+search.text_analyzer_mode=INDEX
+
+# rocksdb backend config
+#rocksdb.data_path=/path/to/disk
+#rocksdb.wal_path=/path/to/disk
+
+# cassandra backend config
+cassandra.host=localhost
+cassandra.port=9042
+cassandra.username=
+cassandra.password=
+#cassandra.connect_timeout=5
+#cassandra.read_timeout=20
+#cassandra.keyspace.strategy=SimpleStrategy
+#cassandra.keyspace.replication=3
+
+# hbase backend config
+#hbase.hosts=localhost
+#hbase.port=2181
+#hbase.znode_parent=/hbase
+#hbase.threads_max=64
+
+# mysql backend config
+#jdbc.driver=com.mysql.jdbc.Driver
+#jdbc.url=jdbc:mysql://127.0.0.1:3306
+#jdbc.username=root
+#jdbc.password=
+#jdbc.reconnect_max_times=3
+#jdbc.reconnect_interval=3
+#jdbc.sslmode=false
+
+# postgresql & cockroachdb backend config
+#jdbc.driver=org.postgresql.Driver
+#jdbc.url=jdbc:postgresql://localhost:5432/
+#jdbc.username=postgres
+#jdbc.password=
+
+# palo backend config
+#palo.host=127.0.0.1
+#palo.poll_interval=10
+#palo.temp_dir=./palo-data
+#palo.file_limit_size=32
+
Pay particular attention to the uncommented items:
- gremlin.graph: the startup entry point for GremlinServer; users should not modify this item;
- backend: the backend store to use; valid values are memory, cassandra, scylladb, mysql, hbase, postgresql and rocksdb;
- serializer: mainly for internal use, serializing schema, vertices and edges to the backend; valid values are text, cassandra, scylladb and binary. (Note: the rocksdb backend requires binary; for the other backends, serializer must match backend, e.g. hbase for the hbase backend.)
- store: the database name used to store the graph in the backend; for cassandra and scylladb it is the keyspace name. Its value has no relation to the graph names in GremlinServer and RestServer, but for clarity it is recommended to use the same name;
- cassandra.host: only meaningful when backend is cassandra or scylladb; the seeds of the cassandra/scylladb cluster;
- cassandra.port: only meaningful when backend is cassandra or scylladb; the native port of the cassandra/scylladb cluster;
- rocksdb.data_path: only meaningful when backend is rocksdb; the rocksdb data directory;
- rocksdb.wal_path: only meaningful when backend is rocksdb; the rocksdb log directory;
- admin.token: a token for retrieving the server's configuration information, e.g.: http://localhost:8080/graphs/hugegraph/conf?token=162f7848-0b6d-4faf-b557-3a0797869c55
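The admin.token query in the last bullet can be issued straight from the shell. A sketch reusing the example token from the text; substitute the token configured on your own server:
$ curl "http://localhost:8080/graphs/hugegraph/conf?token=162f7848-0b6d-4faf-b557-3a0797869c55"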
5 Multi-graph configuration
The system can host multiple graphs, and each graph can use a different backend; for example graphs hugegraph and hugegraph1, where hugegraph uses cassandra as its backend and hugegraph1 uses rocksdb.
The configuration is straightforward:
Modify gremlin-server.yaml
Add a key-value pair to the graphs section of gremlin-server.yaml; the key is the graph's name and the value is the path to that graph's configuration file, e.g.:
graphs: {
hugegraph: conf/hugegraph.properties,
hugegraph1: conf/hugegraph1.properties
}
Modify rest-server.properties
Add a key-value pair to the graphs item of rest-server.properties; the key is the graph's name and the value is the path to that graph's configuration file, e.g.:
graphs=[hugegraph:conf/hugegraph.properties, hugegraph1:conf/hugegraph1.properties]

Add hugegraph1.properties
Copy hugegraph.properties and name it hugegraph1.properties; change the graph's database name and the backend-related parameters, e.g.:
store=hugegraph1

...

backend=rocksdb
serializer=binary

Stop the Server, run init-store.sh (to create the database for the new graph), then restart the Server
$ bin/stop-hugegraph.sh
$ bin/init-store.sh
$ bin/start-hugegraph.sh
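After the restart, both graphs should be visible through the RESTful API. A sketch, assuming the default address; the graphs listing endpoint is part of the standard API:
$ curl http://localhost:8080/graphs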
Last modified May 10, 2023: docs: update config-guide.md (#212) (88cbc8a)
diff --git a/cn/docs/config/config-https/index.html b/cn/docs/config/config-https/index.html
index f8067b936..c11f009a9 100644
--- a/cn/docs/config/config-https/index.html
+++ b/cn/docs/config/config-https/index.html
@@ -51,13 +51,13 @@
# When running a migration command, if --target-url uses the https protocol, the default value hugegraph takes effect automatically; modify as needed
--target-trust-store-password {target-password}
A default client certificate file hugegraph.truststore is already provided in the conf directory of hugegraph-tools; its password is hugegraph.
How to generate certificate files
This part shows an example of generating certificates; skip it if the default certificate is sufficient or you already know how to generate one.
Server side
- Generate the server private key and import it into the server keystore file; server.keystore is for the server side and holds its own private key
keytool -genkey -alias serverkey -keyalg RSA -keystore server.keystore
Fill in the description fields as needed during the process; the default certificate description is:
First and last name: hugegraph
Organizational unit name: hugegraph
Organization name: hugegraph
City or locality name: BJ
State or province name: BJ
Country code: CN

- Export the server certificate from the server private key
keytool -export -alias serverkey -keystore server.keystore -file server.crt
server.crt is the server certificate
Client side
keytool -import -alias serverkey -file server.crt -keystore client.truststore
client.truststore is for the client side and holds the trusted certificate
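To confirm the import succeeded, the truststore contents can be listed; this is standard keytool usage (you will be prompted for the truststore password, hugegraph for the default file):
$ keytool -list -keystore client.truststore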
Last modified April 17, 2022: rebuild doc (ef36544)
diff --git a/cn/docs/config/index.xml b/cn/docs/config/index.xml
index 054ca12ad..66f604b03 100644
--- a/cn/docs/config/index.xml
+++ b/cn/docs/config/index.xml
Configure authenticator and its rest-server file path in the gremlin-server.yaml configuration file:
...
  config: {tokens: conf/rest-server.properties}
}
Configure authenticator and its graph_store information in the rest-server.properties configuration file:
auth.authenticator=com.baidu.hugegraph.auth.StandardAuthenticator
auth.graph_store=hugegraph

# auth client config
# if GraphServer and AuthServer are deployed separately, the following must also be configured, with the address set to the AuthServer's IP:RPC-port
#auth.remote_url=127.0.0.1:8899,127.0.0.1:8898,127.0.0.1:8897

The graph_store option specifies which graph is used to store user information; if multiple graphs exist, any one of them may be chosen.
Configure gremlin.graph in the hugegraph{n}.properties configuration file:
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
For detailed permission API calls and explanations, refer to the Authentication-API document.
ConfigAuthenticator mode
ConfigAuthenticator mode supports user authentication through user information preset in the configuration files; the implementation validates users against statically configured tokens. The configuration flow is as follows (restart the service to take effect):
Configure authenticator and its rest-server file path in the gremlin-server.yaml configuration file:
...
  config: {tokens: conf/rest-server.properties}
}
Configure authenticator and its tokens information in the rest-server.properties configuration file:
auth.authenticator=com.baidu.hugegraph.auth.ConfigAuthenticator
auth.admin_token=token-value-a
auth.user_tokens=[hugegraph1:token-value-1, hugegraph2:token-value-2]
Configure gremlin.graph in the hugegraph{n}.properties configuration file:
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
Custom user authentication system
To support a more flexible user system, a custom authenticator can be implemented as an extension: implement the interface com.baidu.hugegraph.auth.HugeAuthenticator and point the authenticator configuration item at that implementation.
Overview
HugeGraphServer uses the http protocol by default; if you require secure requests, it can be configured to use https.
diff --git a/cn/docs/guides/_print/index.html b/cn/docs/guides/_print/index.html
index 75e15df8f..47340deab 100644
--- a/cn/docs/guides/_print/index.html
+++ b/cn/docs/guides/_print/index.html
@@ -285,28 +285,28 @@
This command shows the current graph mode, one of: NONE, RESTORING, MERGING.
bin/hugegraph graph-mode-set -m RESTORING
This command sets the graph mode. Before a Restore it can be set to RESTORING or MERGING; this example sets RESTORING.
Step 2: Restore the data
bin/hugegraph restore -t all -d data
This command re-imports all metadata and graph data under the data directory into the hugegraph graph at http://127.0.0.1.
Step 3: Restore the graph mode
bin/hugegraph graph-mode-set -m NONE
This command restores the graph mode to NONE.
This completes a full graph backup and restore cycle.
Help
For detailed usage of the backup and restore commands, refer to the hugegraph-tools documentation.
Notes on the APIs used and implemented by Backup/Restore
Backup
Backup exports data through the corresponding list (GET) APIs for metadata and graph data; no new API was added.
Restore
Restore imports data through the corresponding create (POST) APIs for metadata and graph data; no new API was added.
Restore has two distinct modes, Restoring and Merging; in addition there is the regular mode NONE (the default). They differ as follows:
- NONE mode: writes of metadata and graph data behave normally (see the feature documentation). In particular:
  - metadata (schema) may not specify an ID at creation
  - graph data (vertex) may not specify an ID when the id strategy is Automatic
- Restoring mode: restores into a new graph. In particular:
  - metadata (schema) may specify an ID at creation
  - graph data (vertex) may specify an ID when the id strategy is Automatic
- Merging mode: merges into a graph that already contains metadata and graph data. In particular:
  - metadata (schema) may not specify an ID at creation
  - graph data (vertex) may specify an ID when the id strategy is Automatic
Normally the graph mode is NONE. When a graph needs to be restored, temporarily switch it to Restoring or Merging as required, and switch it back to NONE once the Restore completes.
The RESTful API implemented for setting the graph mode is as follows:
View the mode of a graph. This operation requires admin permission
Method & Url
GET http://localhost:8080/graphs/{graph}/mode
Response Status
200
Response Body
{
  "mode": "NONE"
}
Valid graph modes are: NONE, RESTORING, MERGING
Set the mode of a graph. This operation requires admin permission
Method & Url
PUT http://localhost:8080/graphs/{graph}/mode
Request Body
"RESTORING"
Response Status
200
Response Body
{
  "mode": "RESTORING"
}
Valid graph modes are: NONE, RESTORING, MERGING
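The same mode switch can be driven with curl. A sketch against the default address using the hugegraph graph; note that the PUT payload is the bare JSON string shown above, and admin permission is required:
$ curl http://localhost:8080/graphs/hugegraph/mode
$ curl -X PUT -H "Content-Type: application/json" -d '"RESTORING"' http://localhost:8080/graphs/hugegraph/mode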
5 - FAQ
How should a backend store be chosen? RocksDB, Cassandra, HBase or MySQL?
Judge by your actual needs: in general RocksDB is recommended for a single machine or data volumes under 10 billion; otherwise a distributed storage backend cluster is recommended.
Starting the service prints: xxx (core dumped) xxx
Check whether the JDK version is Java11 (at least Java8).
The service started successfully, but operating on a graph gives messages like "cannot connect to the backend or the connection is not open"
Before starting the service for the first time, initialize the backend with init-store; later versions will make this message clearer.
Do all backends require init-store before use, and can the serializer be chosen freely?
All backends except memory require it, e.g. cassandra, hbase and rocksdb; the serializer must correspond one-to-one with the backend and cannot be chosen freely.
Running init-store reports: Exception in thread "main" java.lang.UnsatisfiedLinkError: /tmp/librocksdbjni3226083071221514754.so: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.10' not found (required by /tmp/librocksdbjni3226083071221514754.so)
RocksDB requires gcc 4.3.0 (GLIBCXX_3.4.10) or above.
Running init-store.sh reports: NoHostAvailableException
NoHostAvailableException means the Cassandra service cannot be reached. If the cassandra backend is indeed intended, install and start that service first. The message itself is admittedly not very direct, and we will clarify it in the documentation.
The bin directory contains three startup-related scripts, start-hugegraph.sh, start-restserver.sh and start-gremlinserver.sh; which one should be used?
Since version 0.3.3, GremlinServer and RestServer have been merged into HugeGraphServer; start it with start-hugegraph.sh. The latter two scripts will be removed in a later version.
Two graphs named hugegraph and hugegraph1 are configured, and the startup command is start-hugegraph.sh; does it open only the hugegraph graph?
start-hugegraph.sh opens all graphs listed under graphs in gremlin-server.yaml; there is no direct relation between the script name and the graph names.
After the service starts successfully, querying all vertices with curl returns garbled output
Batches of vertices/edges returned by the server are gzip-compressed. The output can be piped to gunzip for decompression (curl http://example | gunzip), or the request can be sent with Firefox's postman or Chrome's restlet plugin, which decompress responses automatically.
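Besides piping through gunzip, curl itself can request and decompress gzip transparently with its standard --compressed flag. A sketch; the URL stands in for any batch vertex or edge query:
$ curl --compressed http://localhost:8080/graphs/hugegraph/graph/vertices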
Querying a vertex by Id through the RESTful API returns empty, but the vertex does exist
Check the type of the vertex Id: if it is a string, the id part of the API url must be wrapped in double quotes; numeric ids need no quotes.
The vertex Id is already wrapped in double quotes as required, but the RESTful API query still returns empty
Check whether the vertex id contains the URL-reserved characters +, space, /, ?, %, & or =; if so they must be encoded. The encodings are:

Special character | Encoding
---|---
+ | %2B
space | %20
/ | %2F
? | %3F
% | %25
# | %23
& | %26
= | %3D
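Putting the quoting and encoding rules together: for a string id such as marko, the surrounding double quotes are themselves percent-encoded as %22, and any reserved character inside the id is encoded per the table above. A sketch against the default address and graph:
$ curl 'http://localhost:8080/graphs/hugegraph/graph/vertices/%22marko%22'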
Querying vertices or edges of one label (query by label) times out
The amount of data under one label may be large; add a limit.
Operating on the graph through the RESTful API works, but sending Gremlin statements fails with Request Failed(500)
The GremlinServer configuration may be wrong; check whether host and port in gremlin-server.yaml match gremlinserver.url in rest-server.properties, fix them if not, then restart the service.
Importing data with Loader hits Socket Timeout exceptions and Loader then aborts
Sustained imports put the Server under heavy pressure and cause some requests to time out. Tuning Loader's parameters (retry count, retry interval, error tolerance, etc.) relieves Server pressure and lowers the frequency of this problem.
How can all vertices and edges be deleted? The RESTful API has no such interface, and calling Gremlin's g.V().drop() fails with Vertices in transaction have reached capacity xxx
There is currently no good way to delete all data. Users who deploy their own Server and backend can simply clear the database and restart the Server. Alternatively, fetch all data with the paging API or scan API first, then delete it item by item.
The database was cleared and init-store was run, but adding a schema still reports "xxx has existed"
HugeGraphServer keeps caches, so clearing the database also requires restarting the Server; otherwise the leftover cache causes inconsistencies.
Inserting vertices or edges fails with: Id max length is 128, but got xxx {yyy} or Big id max length is 32768, but got xxx
To guarantee query performance, the current backend stores limit the length of id columns: a vertex id may not exceed 128 bytes, an edge id may not exceed 32768 bytes, and an index id may not exceed 128 bytes.
Are nested properties supported, and if not, is there an alternative?
Nested properties are not supported for now. Alternative: extract the nested property as a separate vertex and connect it with an edge.
Can one EdgeLabel connect multiple pairs of VertexLabels, e.g. an "invest" relation that is either "person invests in company" or "company invests in company"?
One EdgeLabel cannot connect multiple pairs of VertexLabels; split the EdgeLabel more finely, e.g. "person-invest" and "company-invest".
Sending a request through the RestAPI reports HTTP 415 Unsupported Media Type
The request header must specify Content-Type:application/json
Other questions can be searched in the issue area of the corresponding project, e.g. Server-Issues / Loader Issues
CPU | Memory | NIC | Disk |
---|---|---|---|
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD |

The tests use graphdb-benchmark, a benchmark suite for graph databases. The suite contains four kinds of tests:
Massive Insertion: batch-insert vertices and edges, committing a fixed number of vertices or edges at a time
Single Insertion: insert one record at a time, committing each vertex or edge immediately
Query: the basic query operations of a graph database
Clustering: a community detection algorithm based on the Louvain Method
The tests use both synthetic and real data:
MIW, SIW and QW use SNAP datasets
CW uses synthetic data generated by the LFR-Benchmark generator

Name | Vertex count | Edge count | File size |
---|---|---|---|
email-enron.txt | 36,691 | 367,661 | 4MB |
com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB |
amazon0601.txt | 403,393 | 3,387,388 | 47.9MB |
com-lj.ungraph.txt | 3,997,961 | 34,681,189 | 479MB |

HugeGraph version: 0.5.6; RestServer, Gremlin Server and the backends all run on the same server
Titan version: 0.5.4, using thrift+Cassandra mode
Neo4j version: 2.0.1
The Titan version adapted by graphdb-benchmark is 0.5.4
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
---|---|---|---|---|
HugeGraph | 0.629 | 5.711 | 5.243 | 67.033 |
Titan | 10.15 | 108.569 | 150.266 | 1217.944 |
Neo4j | 3.884 | 18.938 | 24.890 | 281.537 |
Notes
Backend | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w) | com-lj.ungraph(400w) |
---|---|---|---|---|
HugeGraph | 4.072 | 45.118 | 66.006 | 609.083 |
Titan | 8.084 | 92.507 | 184.543 | 1099.371 |
Neo4j | 2.424 | 10.537 | 11.609 | 106.919 |
Notes
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
---|---|---|---|---|
HugeGraph | 1.540 | 10.764 | 11.243 | 151.271 |
Titan | 7.361 | 93.344 | 169.218 | 1085.235 |
Neo4j | 1.673 | 4.775 | 4.284 | 40.507 |
Notes
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
---|---|---|---|---|
HugeGraph | 0.494 | 0.103 | 3.364 | 8.155 |
Titan | 11.818 | 0.239 | 377.709 | 575.678 |
Neo4j | 1.719 | 1.800 | 1.956 | 8.530 |
Notes

Vertex | Depth | Degree 1 | Degree 2 | Degree 3 | Degree 4 | Degree 5 | Degree 6 |
---|---|---|---|---|---|---|---|
v1 | time | 0.031s | 0.033s | 0.048s | 0.500s | 11.27s | OOM |
v111 | time | 0.027s | 0.034s | 0.115s | 1.36s | OOM | – |
v1111 | time | 0.039s | 0.027s | 0.052s | 0.511s | 10.96s | OOM |

Notes
Vertex | Depth | Degree 1 | Degree 2 | Degree 3 | Degree 4 | Degree 5 | Degree 6 |
---|---|---|---|---|---|---|---|
v1 | time | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s | OOM |
 | degree | 10 | 133 | 2453 | 50,830 | 1,128,688 | |
v111 | time | 0.032s | 0.042s | 0.136s | 1.25s | 20.62s | OOM |
 | degree | 10 | 211 | 4944 | 113,150 | 2,629,970 | |
v1111 | time | 0.039s | 0.045s | 0.053s | 1.10s | 2.92s | OOM |
 | degree | 10 | 140 | 2555 | 50,825 | 1,070,230 | |

Notes
Database | Scale 1000 | Scale 5000 | Scale 10000 | Scale 20000 |
---|---|---|---|---|
HugeGraph(core) | 20.804 | 242.099 | 744.780 | 1700.547 |
Titan | 45.790 | 820.633 | 2652.235 | 9568.623 |
Neo4j | 5.913 | 50.267 | 142.354 | 460.880 |

Notes
The HugeGraph API performance tests mainly measure HugeGraph-Server's ability to handle concurrent RESTful API requests, including:
The RESTful API performance results of each HugeGraph release can be found at:
Earlier releases only provided API performance tests for the best-performing backend among those HugeGraph supports; starting from version 0.5.6, results for both single node and cluster are provided separately.
Machine under test
CPU | Memory | NIC | Disk |
---|---|---|---|
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD |
Note: the load-generating machine and the machine under test are in the same data center
The backend store is RocksDB; HugeGraph and RocksDB run on the same machine, and the server configuration files keep their defaults apart from host and port.
Test method: keep raising the concurrency to find the load ceiling at which the server can still serve normally
Duration: 5min
(The per-scenario conclusions accompanied charts that are not reproduced in this text view.)
Machine under test
CPU | Memory | NIC | Disk |
---|---|---|---|
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD |
Note: the load-generating machine and the machine under test are in the same data center
The backend store is a 15-node Cassandra cluster; HugeGraph and the Cassandra cluster are on different servers, and the server configuration files keep their defaults apart from host and port.
Test method: keep raising the concurrency to find the load ceiling at which the server can still serve normally
Duration: 5min
(The per-scenario conclusions accompanied charts that are not reproduced in this text view.)
Machine under test
Machine # | CPU | Memory | NIC | Disk |
---|---|---|---|---|
1 | 24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps | 1.4T HDD |
2 | 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD |
Note: the load-generating machine and the machine under test are in the same data center
The backend store is RocksDB; HugeGraph and RocksDB run on the same machine, and the server configuration files keep their defaults apart from host and port.
Test method: keep raising the concurrency to find the load ceiling at which the server can still serve normally
Duration: 5min
1. Effect of CPU and memory on insert performance (both servers store RocksDB data on HDD, batch insert)
2. Effect of SSD vs HDD on insert performance (high-performance server, batch insert)
3. Effect of different numbers of concurrent threads on insert performance (ordinary server, RocksDB data on HDD)
Test method: keep raising the concurrency to find the load ceiling at which the server can still serve normally
The load-generating and tested machines have identical configurations, with these basic parameters:
CPU | Memory | NIC |
---|---|---|
24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps |
Test tool: apache-Jmeter-2.5.1
batch_size_warn_threshold_in_kb: 1000
batch_size_fail_threshold_in_kb: 1000
Note: all times are in ms
Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
---|---|---|---|---|---|---|---|---|---|
property_keys | 331000 | 1 | 1 | 2 | 0 | 172 | 0.00% | 920.7/sec | 178.1 |
vertex_labels | 331000 | 1 | 2 | 2 | 1 | 126 | 0.00% | 920.7/sec | 193.4 |
edge_labels | 331000 | 2 | 2 | 3 | 1 | 158 | 0.00% | 920.7/sec | 242.8 |
Conclusion: under a sustained load of 1000 concurrent users for 5 minutes, the schema APIs respond in 1-2 ms on average; no pressure.
Test method: fix the concurrency and measure the processing rate of the server and backend
Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
---|---|---|---|---|---|---|---|---|---|
single_insert_vertices | 331000 | 0 | 1 | 1 | 0 | 21 | 0.00% | 920.7/sec | 234.4 |
single_insert_edges | 331000 | 2 | 2 | 3 | 1 | 53 | 0.00% | 920.7/sec | 309.1 |
Test method: keep raising the concurrency to find the load ceiling at which the server can still serve normally
Concurrency | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
---|---|---|---|---|---|---|---|---|---|
2000(vertex) | 661916 | 1 | 1 | 1 | 0 | 3012 | 0.00% | 1842.9/sec | 469.1 |
4000(vertex) | 1316124 | 13 | 1 | 14 | 0 | 9023 | 0.00% | 3673.1/sec | 935.0 |
5000(vertex) | 1468121 | 1010 | 1135 | 1227 | 0 | 9223 | 0.06% | 4095.6/sec | 1046.0 |
7000(vertex) | 1378454 | 1617 | 1708 | 1886 | 0 | 9361 | 0.08% | 3860.3/sec | 987.1 |
2000(edge) | 629399 | 953 | 1043 | 1113 | 1 | 9001 | 0.00% | 1750.3/sec | 587.6 |
3000(edge) | 648364 | 2258 | 2404 | 2500 | 2 | 9001 | 0.00% | 1810.7/sec | 607.9 |
4000(edge) | 649904 | 1992 | 2112 | 2211 | 1 | 9001 | 0.06% | 1812.5/sec | 608.5 |
Test method: fix the concurrency and measure the processing rate of the server and backend
Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
---|---|---|---|---|---|---|---|---|---|
batch_insert_vertices | 37162 | 8959 | 9595 | 9704 | 17 | 9852 | 0.00% | 103.4/sec | 393.3 |
batch_insert_edges | 10800 | 31849 | 34544 | 35132 | 435 | 35747 | 0.00% | 28.8/sec | 814.9 |
When the number of graph records (vertices and edges) to batch-insert is at the billion level or below, or the total data volume is less than a TB, the HugeGraph-Loader tool can be used for sustained, high-speed import of graph data.
All tests use the edge data of the web-link dataset.
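For reference, a typical Loader invocation follows the pattern below. This is only a sketch: struct.json (the input mapping) and schema.groovy are hypothetical file names here, and the exact options should be checked against the hugegraph-loader documentation for your version:
$ bin/hugegraph-loader -g hugegraph -f struct.json -s schema.groovy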
The following results compare HugeGraph backends under the same environment, benchmark suite and datasets as above. The Titan version adapted by graphdb-benchmark is 0.5.4
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) |
---|---|---|---|
Titan | 9.516 | 88.123 | 111.586 |
RocksDB | 2.345 | 14.076 | 16.636 |
Cassandra | 11.930 | 108.709 | 101.959 |
Memory | 3.077 | 15.204 | 13.841 |
Notes
Backend | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w) |
---|---|---|---|
Titan | 7.724 | 70.935 | 128.884 |
RocksDB | 8.876 | 65.852 | 63.388 |
Cassandra | 13.125 | 126.959 | 102.580 |
Memory | 22.309 | 207.411 | 165.609 |
Notes
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) |
---|---|---|---|
Titan | 7.119 | 63.353 | 115.633 |
RocksDB | 6.032 | 64.526 | 52.721 |
Cassandra | 9.410 | 102.766 | 94.197 |
Memory | 12.340 | 195.444 | 140.89 |
Notes
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) |
---|---|---|---|
Titan | 11.333 | 0.313 | 376.06 |
RocksDB | 44.391 | 2.221 | 268.792 |
Cassandra | 39.845 | 3.337 | 331.113 |
Memory | 35.638 | 2.059 | 388.987 |
Notes
Database | Scale 1000 | Scale 5000 | Scale 10000 | Scale 20000 |
---|---|---|---|---|
Titan | 45.943 | 849.168 | 2737.117 | 9791.46 |
Memory(core) | 41.077 | 1825.905 | * | * |
Cassandra(core) | 39.783 | 862.744 | 2423.136 | 6564.191 |
RocksDB(core) | 33.383 | 199.894 | 763.869 | 1677.813 |

Notes
Return to the regular view of this page.
CPU | Memory | 网卡 | 磁盘 |
---|---|---|---|
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD |
测试使用graphdb-benchmark,一个图数据库测试集。该测试集主要包含4类测试:
Massive Insertion,批量插入顶点和边,一定数量的顶点或边一次性提交
Single Insertion,单条插入,每个顶点或者每条边立即提交
Query,主要是图数据库的基本查询操作:
Clustering,基于Louvain Method的社区发现算法
测试使用人造数据和真实数据
MIW、SIW和QW使用SNAP数据集
CW使用LFR-Benchmark generator生成的人造数据
名称 | vertex数目 | edge数目 | 文件大小 |
---|---|---|---|
email-enron.txt | 36,691 | 367,661 | 4MB |
com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB |
amazon0601.txt | 403,393 | 3,387,388 | 47.9MB |
com-lj.ungraph.txt | 3997961 | 34681189 | 479MB |
HugeGraph版本:0.5.6,RestServer和Gremlin Server和backends都在同一台服务器上
Titan版本:0.5.4, 使用thrift+Cassandra模式
Neo4j版本:2.0.1
graphdb-benchmark适配的Titan版本为0.5.4
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
---|---|---|---|---|
HugeGraph | 0.629 | 5.711 | 5.243 | 67.033 |
Titan | 10.15 | 108.569 | 150.266 | 1217.944 |
Neo4j | 3.884 | 18.938 | 24.890 | 281.537 |
说明
Backend | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w) | com-lj.ungraph(400w) |
---|---|---|---|---|
HugeGraph | 4.072 | 45.118 | 66.006 | 609.083 |
Titan | 8.084 | 92.507 | 184.543 | 1099.371 |
Neo4j | 2.424 | 10.537 | 11.609 | 106.919 |
说明
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
---|---|---|---|---|
HugeGraph | 1.540 | 10.764 | 11.243 | 151.271 |
Titan | 7.361 | 93.344 | 169.218 | 1085.235 |
Neo4j | 1.673 | 4.775 | 4.284 | 40.507 |
说明
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
---|---|---|---|---|
HugeGraph | 0.494 | 0.103 | 3.364 | 8.155 |
Titan | 11.818 | 0.239 | 377.709 | 575.678 |
Neo4j | 1.719 | 1.800 | 1.956 | 8.530 |
说明
顶点 | 深度 | 一度 | 二度 | 三度 | 四度 | 五度 | 六度 |
---|---|---|---|---|---|---|---|
v1 | 时间 | 0.031s | 0.033s | 0.048s | 0.500s | 11.27s | OOM |
v111 | 时间 | 0.027s | 0.034s | 0.115 | 1.36s | OOM | – |
v1111 | 时间 | 0.039s | 0.027s | 0.052s | 0.511s | 10.96s | OOM |
说明
顶点 | 深度 | 一度 | 二度 | 三度 | 四度 | 五度 | 六度 |
---|---|---|---|---|---|---|---|
v1 | 时间 | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s | OOM |
度 | 10 | 133 | 2453 | 50,830 | 1,128,688 | ||
v111 | 时间 | 0.032s | 0.042s | 0.136s | 1.25s | 20.62s | OOM |
度 | 10 | 211 | 4944 | 113150 | 2,629,970 | ||
v1111 | 时间 | 0.039s | 0.045s | 0.053s | 1.10s | 2.92s | OOM |
度 | 10 | 140 | 2555 | 50825 | 1,070,230 |
说明
数据库 | 规模1000 | 规模5000 | 规模10000 | 规模20000 |
---|---|---|---|---|
HugeGraph(core) | 20.804 | 242.099 | 744.780 | 1700.547 |
Titan | 45.790 | 820.633 | 2652.235 | 9568.623 |
Neo4j | 5.913 | 50.267 | 142.354 | 460.880 |
说明
HugeGraph API性能测试主要测试HugeGraph-Server对RESTful API请求的并发处理能力,包括:
HugeGraph的每个发布版本的RESTful API的性能测试情况可以参考:
之前的版本只提供HugeGraph所支持的后端种类中性能最好的API性能测试,从0.5.6版本开始,分别提供了单机和集群的性能情况
被压机器信息
CPU | Memory | 网卡 | 磁盘 |
---|---|---|---|
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD,2.7T HDD |
注:起压机器和被压机器在同一机房
后端存储使用RocksDB,HugeGraph与RocksDB都在同一机器上启动,server相关的配置文件除主机和端口有修改外,其余均保持默认。
不断提升并发量,测试server仍能正常提供服务的压力上限
持续时间:5min
####### 结论:
####### 结论:
不断提升并发量,测试server仍能正常提供服务的压力上限
####### 结论:
####### 结论:
不断提升并发量,测试server仍能正常提供服务的压力上限
####### 结论:
####### 结论:
被压机器信息
CPU | Memory | 网卡 | 磁盘 |
---|---|---|---|
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD,2.7T HDD |
注:起压机器和被压机器在同一机房
后端存储使用15节点Cassandra集群,HugeGraph与Cassandra集群位于不同的服务器,server相关的配置文件除主机和端口有修改外,其余均保持默认。
不断提升并发量,测试server仍能正常提供服务的压力上限
持续时间:5min
####### 结论:
####### 结论:
不断提升并发量,测试server仍能正常提供服务的压力上限
####### 结论:
####### 结论:
不断提升并发量,测试server仍能正常提供服务的压力上限
####### 结论:
####### 结论:
被压机器信息
机器编号 | CPU | Memory | 网卡 | 磁盘 |
---|---|---|---|---|
1 | 24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps | 1.4T HDD |
2 | 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD,2.7T HDD |
注:起压机器和被压机器在同一机房
后端存储使用RocksDB,HugeGraph与RocksDB都在同一机器上启动,server相关的配置文件除主机和端口有修改外,其余均保持默认。
不断提升并发量,测试server仍能正常提供服务的压力上限
持续时间:5min
1. CPU和内存对插入性能的影响(服务器都使用HDD存储RocksDB数据,批量插入)
2. SSD和HDD对插入性能的影响(高性能服务器,批量插入)
3. 不同并发线程数对插入性能的影响(普通服务器,使用HDD存储RocksDB数据)
不断提升并发量,测试server仍能正常提供服务的压力上限
起压和被压机器配置相同,基本参数如下:
CPU | Memory | 网卡 |
---|---|---|
24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps |
测试工具:apache-Jmeter-2.5.1
batch_size_warn_threshold_in_kb: 1000
+ batch_size_fail_threshold_in_kb: 1000
+
注:时间的单位均为ms
Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
---|---|---|---|---|---|---|---|---|---|
property_keys | 331000 | 1 | 1 | 2 | 0 | 172 | 0.00% | 920.7/sec | 178.1 |
vertex_labels | 331000 | 1 | 2 | 2 | 1 | 126 | 0.00% | 920.7/sec | 193.4 |
edge_labels | 331000 | 2 | 2 | 3 | 1 | 158 | 0.00% | 920.7/sec | 242.8 |
结论:schema的接口,在1000并发持续5分钟的压力下,平均响应时间1-2ms,无压力
测试方法:固定并发量,测试server和后端的处理速率
Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
---|---|---|---|---|---|---|---|---|---|
single_insert_vertices | 331000 | 0 | 1 | 1 | 0 | 21 | 0.00% | 920.7/sec | 234.4 |
single_insert_edges | 331000 | 2 | 2 | 3 | 1 | 53 | 0.00% | 920.7/sec | 309.1 |
测试方法:不断提升并发量,测试server仍能正常提供服务的压力上限
Concurrency | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
---|---|---|---|---|---|---|---|---|---|
2000(vertex) | 661916 | 1 | 1 | 1 | 0 | 3012 | 0.00% | 1842.9/sec | 469.1 |
4000(vertex) | 1316124 | 13 | 1 | 14 | 0 | 9023 | 0.00% | 3673.1/sec | 935.0 |
5000(vertex) | 1468121 | 1010 | 1135 | 1227 | 0 | 9223 | 0.06% | 4095.6/sec | 1046.0 |
7000(vertex) | 1378454 | 1617 | 1708 | 1886 | 0 | 9361 | 0.08% | 3860.3/sec | 987.1 |
2000(edge) | 629399 | 953 | 1043 | 1113 | 1 | 9001 | 0.00% | 1750.3/sec | 587.6 |
3000(edge) | 648364 | 2258 | 2404 | 2500 | 2 | 9001 | 0.00% | 1810.7/sec | 607.9 |
4000(edge) | 649904 | 1992 | 2112 | 2211 | 1 | 9001 | 0.06% | 1812.5/sec | 608.5 |
测试方法:固定并发量,测试server和后端的处理速率
Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
---|---|---|---|---|---|---|---|---|---|
batch_insert_vertices | 37162 | 8959 | 9595 | 9704 | 17 | 9852 | 0.00% | 103.4/sec | 393.3 |
batch_insert_edges | 10800 | 31849 | 34544 | 35132 | 435 | 35747 | 0.00% | 28.8/sec | 814.9 |
当要批量插入的图数据(包括顶点和边)条数为billion级别及以下,或者总数据量小于TB时,可以采用HugeGraph-Loader工具持续、高速导入图数据
测试均采用网址数据的边数据
CPU | Memory | 网卡 | 磁盘 |
---|---|---|---|
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD |
测试使用graphdb-benchmark,一个图数据库测试集。该测试集主要包含4类测试:
Massive Insertion,批量插入顶点和边,一定数量的顶点或边一次性提交
Single Insertion,单条插入,每个顶点或者每条边立即提交
Query,主要是图数据库的基本查询操作:
Clustering,基于Louvain Method的社区发现算法
测试使用人造数据和真实数据
MIW、SIW和QW使用SNAP数据集
CW使用LFR-Benchmark generator生成的人造数据
名称 | vertex数目 | edge数目 | 文件大小 |
---|---|---|---|
email-enron.txt | 36,691 | 367,661 | 4MB |
com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB |
amazon0601.txt | 403,393 | 3,387,388 | 47.9MB |
graphdb-benchmark适配的Titan版本为0.5.4
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) |
---|---|---|---|
Titan | 9.516 | 88.123 | 111.586 |
RocksDB | 2.345 | 14.076 | 16.636 |
Cassandra | 11.930 | 108.709 | 101.959 |
Memory | 3.077 | 15.204 | 13.841 |
说明
Backend | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w) |
---|---|---|---|
Titan | 7.724 | 70.935 | 128.884 |
RocksDB | 8.876 | 65.852 | 63.388 |
Cassandra | 13.125 | 126.959 | 102.580 |
Memory | 22.309 | 207.411 | 165.609 |
说明
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) |
---|---|---|---|
Titan | 7.119 | 63.353 | 115.633 |
RocksDB | 6.032 | 64.526 | 52.721 |
Cassandra | 9.410 | 102.766 | 94.197 |
Memory | 12.340 | 195.444 | 140.89 |
Notes
Backend | email-enron (300K) | amazon0601 (3M) | com-youtube.ungraph (3M) |
---|---|---|---|
Titan | 11.333 | 0.313 | 376.06 |
RocksDB | 44.391 | 2.221 | 268.792 |
Cassandra | 39.845 | 3.337 | 331.113 |
Memory | 35.638 | 2.059 | 388.987 |
Notes

Vertex | Depth | 1-hop | 2-hop | 3-hop | 4-hop | 5-hop | 6-hop |
---|---|---|---|---|---|---|---|
v1 | time | 0.031s | 0.033s | 0.048s | 0.500s | 11.27s | OOM |
v111 | time | 0.027s | 0.034s | 0.115s | 1.36s | OOM | – |
v1111 | time | 0.039s | 0.027s | 0.052s | 0.511s | 10.96s | OOM |

Notes
Vertex | Depth | 1-hop | 2-hop | 3-hop | 4-hop | 5-hop | 6-hop |
---|---|---|---|---|---|---|---|
v1 | time | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s | OOM |
 | degree | 10 | 133 | 2,453 | 50,830 | 1,128,688 | |
v111 | time | 0.032s | 0.042s | 0.136s | 1.25s | 20.62s | OOM |
 | degree | 10 | 211 | 4,944 | 113,150 | 2,629,970 | |
v1111 | time | 0.039s | 0.045s | 0.053s | 1.10s | 2.92s | OOM |
 | degree | 10 | 140 | 2,555 | 50,825 | 1,070,230 | |

Notes
Database | Size 1000 | Size 5000 | Size 10000 | Size 20000 |
---|---|---|---|---|
Titan | 45.943 | 849.168 | 2737.117 | 9791.46 |
Memory(core) | 41.077 | 1825.905 | * | * |
Cassandra(core) | 39.783 | 862.744 | 2423.136 | 6564.191 |
RocksDB(core) | 33.383 | 199.894 | 763.869 | 1677.813 |
Notes
The HugeGraph API performance tests mainly measure HugeGraph-Server's ability to handle concurrent RESTful API requests, including schema operations and single and batch insertion of vertices and edges.
RESTful API performance results are available for each HugeGraph release.
Earlier releases reported API performance only for the best-performing backend among those HugeGraph supports; starting with version 0.5.6, results are provided for both single-node and cluster setups.

Load-bearing machine:

CPU | Memory | NIC | Disk |
---|---|---|---|
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD |

Note: the load-generating and load-bearing machines are in the same data center.
The backend store is RocksDB; HugeGraph and RocksDB run on the same machine, and the server configuration files keep their defaults except for host and port.
Test method: keep raising the concurrency to find the maximum load under which the server can still serve requests normally.
Duration: 5 min
Load-bearing machine:

CPU | Memory | NIC | Disk |
---|---|---|---|
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD |

Note: the load-generating and load-bearing machines are in the same data center.
The backend store is a 15-node Cassandra cluster; HugeGraph and the Cassandra cluster run on different servers, and the server configuration files keep their defaults except for host and port.
Test method: keep raising the concurrency to find the maximum load under which the server can still serve requests normally.
Duration: 5 min
Load-bearing machines:

Machine # | CPU | Memory | NIC | Disk |
---|---|---|---|---|
1 | 24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps | 1.4T HDD |
2 | 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD |

Note: the load-generating and load-bearing machines are in the same data center.
The backend store is RocksDB; HugeGraph and RocksDB run on the same machine, and the server configuration files keep their defaults except for host and port.
Test method: keep raising the concurrency to find the maximum load under which the server can still serve requests normally.
Duration: 5 min
HugeGraph-Server is the core component of the HugeGraph project, comprising the Core, Backend, and API submodules.
The Core module implements the TinkerPop interface; the Backend module manages data storage, with currently supported backends including Memory, Cassandra, ScyllaDB, and RocksDB; the API module provides the HTTP server that translates clients' HTTP requests into calls to Core.
The docs frequently use both spellings, HugeGraph-Server and HugeGraphServer (other components similarly). There is no major difference in meaning; the distinction is that HugeGraph-Server refers to the server-side component code, while HugeGraphServer refers to the running service process.
Prefer running HugeGraph-Server on Java 11; compatibility with Java 8 is retained for now. Before reading further, be sure to check your JDK version:

java -version

If you use the RocksDB backend, be sure to check your GCC version; other backends do not require this:

gcc --version
There are three ways to deploy the HugeGraph-Server component. HugeGraph-Tools provides a one-click command-line deployer that downloads, unpacks, configures, and starts HugeGraph-Server and HugeGraph-Hubble in one step. The latest HugeGraph-Toolchain already bundles all of these tools; just download and unpack it to get the whole toolkit.
# download toolchain package, it includes loader + tool + hubble, please check the latest version (here is 1.0.0)
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
tar zxf *hugegraph-*.tar.gz
# enter the tool's package
cd *hugegraph*/*tool*
Note: ${version} is the version number; see the Download page for the latest version, or download directly from the links there.
The main entry script of HugeGraph-Tools is bin/hugegraph; use the help subcommand to see its usage. Only the one-click deploy command is covered here.
bin/hugegraph deploy -v {hugegraph-version} -p {install-path} [-u {download-path-prefix}]
{hugegraph-version} is the version of HugeGraphServer and HugeGraphStudio to deploy; see the conf/version-mapping.yaml file for version information. {install-path} is the installation directory for HugeGraphServer and HugeGraphStudio. {download-path-prefix} is optional and specifies the download address for the HugeGraphServer and HugeGraphStudio tarballs; the default address is used when omitted. For example, to start HugeGraph-Server and HugeGraphStudio 0.6, write the command as bin/hugegraph deploy -v 0.6 -p services.
# use the latest version, here is 1.0.0 for example
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz
Before building from source, make sure the wget command is installed.
Download the HugeGraph source code:
git clone https://github.com/apache/hugegraph.git
Build and package the tarball:
cd hugegraph
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
......
On success, a hugegraph-*.tar.gz file is generated in the hugegraph directory; that is the built tarball.
Alternatively, refer to the Docker deployment method.
To start HugeGraph quickly just for testing, only a few configuration items need changing (see the next section). For details, refer to the configuration documentation and the introduction to configuration options.
Startup is divided into "first start" and "subsequent start": before the first start the backend database must be initialized and then the service started; afterwards, because the backend database is persistent, the service can simply be started directly.
On startup, HugeGraphServer connects to the backend store and checks its version number; if the backend is uninitialized, or initialized but with a mismatched (older) data version, HugeGraphServer fails to start and reports an error.
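Putting the two cases together, a minimal first-start sequence (for any backend that requires initialization) looks like this; subsequent starts skip the init step:

cd hugegraph-${version}
bin/init-store.sh        # initialize the backend store (first start only)
bin/start-hugegraph.sh   # start HugeGraphServer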
To access HugeGraphServer from other machines, change the restserver.url option in rest-server.properties (default http://127.0.0.1:8080) to the machine's hostname or IP address. Since the required configuration (hugegraph.properties) and the startup steps differ slightly between backends, the configuration and startup of each backend are covered one by one below.

5.1 Memory

Modify hugegraph.properties:

backend=memory
serializer=text

The Memory backend holds its data in memory and cannot persist it; it needs no backend initialization and is the only backend that does not.

Start the server:

bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK

The URL in the prompt matches the restserver.url configured in rest-server.properties.

5.2 RocksDB

RocksDB is an embedded database and needs no separate installation or deployment. It requires GCC >= 4.3.0 (GLIBCXX_3.4.10); upgrade GCC first if this is not met.

Modify hugegraph.properties:

backend=rocksdb
serializer=binary
rocksdb.data_path=.
rocksdb.wal_path=.

Initialize the database (needed only before the first start):

cd hugegraph-${version}
bin/init-store.sh

Start the server:

bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
5.3 Cassandra

Users must install Cassandra themselves, version 3.0 or above (see the download page).

Modify hugegraph.properties:

backend=cassandra
serializer=cassandra

# cassandra backend config
cassandra.host=localhost
cassandra.port=9042
cassandra.username=
cassandra.password=
#cassandra.connect_timeout=5
#cassandra.read_timeout=20

#cassandra.keyspace.strategy=SimpleStrategy
#cassandra.keyspace.replication=3

Initialize the database (needed only before the first start):

cd hugegraph-${version}
bin/init-store.sh
Initing HugeGraph Store...
2017-12-01 11:26:51 1424 [main] [INFO ] com.baidu.hugegraph.HugeGraph [] - Opening backend store: 'cassandra'

Start the server:

bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
5.4 ScyllaDB

Users must install ScyllaDB themselves, version 2.1 or above recommended (see the download page).

Modify hugegraph.properties:

backend=scylladb
serializer=scylladb

# cassandra backend config
cassandra.host=localhost
cassandra.port=9042
cassandra.username=
cassandra.password=
#cassandra.connect_timeout=5
#cassandra.read_timeout=20

#cassandra.keyspace.strategy=SimpleStrategy
#cassandra.keyspace.replication=3

Since ScyllaDB is itself an "optimized version" of Cassandra, users without a ScyllaDB installation can also use Cassandra directly as the backing store: just keep backend and serializer set to scylladb and point host and port at the Cassandra cluster's seeds and port. This is not recommended, however, as it forfeits ScyllaDB's own advantages.

Initialize the database (needed only before the first start):

cd hugegraph-${version}
bin/init-store.sh

Start the server:

bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
5.5 HBase

Users must install HBase themselves, version 2.0 or above (see the download page).

Modify hugegraph.properties:

backend=hbase
serializer=hbase

# hbase backend config
hbase.hosts=localhost
hbase.port=2181
# Note: recommend to modify the HBase partition number by the actual/env data amount & RS amount before init store
# it may influence the loading speed a lot
#hbase.enable_partition=true
#hbase.vertex_partitions=10
#hbase.edge_partitions=30

Initialize the database (needed only before the first start):

cd hugegraph-${version}
bin/init-store.sh

Start the server:

bin/start-hugegraph.sh
Starting HugeGraphServer...
6.1 Use curl to request the RESTful API:

echo `curl -o /dev/null -s -w %{http_code} "http://localhost:8080/graphs/hugegraph/graph/vertices"`

A response code of 200 means the server started correctly.
6.2 Request the Server

HugeGraphServer's RESTful API covers several resource types, typically graph, schema, gremlin, traverser, and task:
- graph contains vertices and edges
- schema contains vertexlabels, propertykeys, edgelabels, and indexlabels
- gremlin contains Gremlin statements such as g.V(), which can be executed synchronously or asynchronously
- traverser contains advanced queries such as shortest paths, intersections, and k-step reachable neighbors
- task covers querying and deleting asynchronous tasks
6.2.1 Get the vertices of hugegraph and their properties:
curl http://localhost:8080/graphs/hugegraph/graph/vertices
Note

Since graphs have many vertices and edges, the server compresses the data before returning it for list-type requests (all vertices, all edges, and so on), so raw curl output looks like garbage; pipe it through gunzip to decompress. The Chrome browser plus the Restlet plugin is also a convenient way to send test HTTP requests.

curl "http://localhost:8080/graphs/hugegraph/graph/vertices" | gunzip

By default HugeGraphServer can only be accessed locally; the configuration can be changed so that it is reachable from other machines:

vim conf/rest-server.properties

restserver.url=http://0.0.0.0:8080

The response body looks like this:

{
"vertices": [
{
"id": "2lop",
...
]
}
For the detailed API, see the RESTful-API documentation.

7 Stop the Server

$cd hugegraph-${version}
$bin/stop-hugegraph.sh
2 - HugeGraph-Loader Quick Start

1 Overview of HugeGraph-Loader

HugeGraph-Loader is HugeGraph's data-import component; it converts data from multiple sources into graph vertices and edges and bulk-imports them into the graph database.

Currently supported data sources:
- Local disk files or directories, in TEXT, CSV, or JSON format, including compressed files
- HDFS files or directories, including compressed files
- Mainstream relational databases such as MySQL, PostgreSQL, Oracle, and SQL Server

Local disk files and HDFS files support resuming an interrupted load; details follow below.

Note: HugeGraph-Loader depends on the HugeGraph Server service; to download and start the Server, see HugeGraph-Server Quick Start.

2 Get HugeGraph-Loader

There are two ways to get HugeGraph-Loader:
- Download the precompiled tarball
- Clone the source and build it

2.1 Download the precompiled tarball

Download the latest HugeGraph-Toolchain release package, which bundles the complete loader + tool + hubble toolset; skip this if you have already downloaded it.

wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz
2.2 Clone the source and build

Clone the latest HugeGraph-Loader source package:

# 1. get from github
git clone https://github.com/apache/hugegraph-toolchain.git

# 2. get from direct (e.g. here is 1.0.0, please choose the latest version)
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0-src.tar.gz
Due to the Oracle ojdbc license restrictions, ojdbc must be installed into the local Maven repository manually. Visit the Oracle JDBC download page and choose the Oracle Database 12c Release 2 (12.2.0.1) drivers, as shown in the figure. Open the link and select "ojdbc8.jar", as shown in the figure. Then install ojdbc8 into the local Maven repository by running the following in the directory containing ojdbc8.jar:

mvn install:install-file -Dfile=./ojdbc8.jar -DgroupId=com.oracle -DartifactId=ojdbc8 -Dversion=12.2.0.1 -Dpackaging=jar

Build the tarball:

cd hugegraph-loader
mvn clean package -DskipTests
3 Usage Workflow

The basic workflow for HugeGraph-Loader has these steps:
- Write the graph schema
- Prepare the data files
- Write the input-source mapping file
- Run the import command

3.1 Write the graph schema

This is the modeling step: you need a clear idea of your existing data and the graph model you want, then write a schema to define that model. For example, to create a graph with two vertex types and two edge types, where the vertices are "person" and "software" and the edges are "person knows person" and "person created software", and these vertices and edges carry properties: a person has "name", "age", and so on; a software has "name", "price", and so on; and a knows edge has a "date" property.

Example graph model

Once the model is designed, the schema definition can be written in groovy and saved to a file, here named schema.groovy.
// create some property keys
schema.edgeLabel("knows").sourceLabel("person").targetLabel("person").ifNotExist().create();
// create the created edge type; these edges point from person to software
schema.edgeLabel("created").sourceLabel("person").targetLabel("software").ifNotExist().create();
For a detailed explanation of the schema, see the corresponding section of hugegraph-client.

3.2 Prepare the Data

Data sources currently supported by HugeGraph-Loader:
- Local disk files or directories
- HDFS files or directories
- Some relational databases

3.2.1 Data-source structure

3.2.1.1 Local disk files or directories

You can use a local disk file as a data source; if the data is spread across multiple files, a directory can also be used, but multiple directories are not yet supported. For example: if the data is spread over files part-0, part-1 … part-n, they must all sit in one directory for the import to work; in the loader's mapping file, set path to that directory.

Supported file formats:
- TEXT
- CSV
- JSON

TEXT is a text file with a custom delimiter. The first line is usually a header recording each column's name, though a headerless file is also allowed (specify this in the mapping file). Every other line is one record, converted into a vertex/edge; each column of a line is one field, converted into the vertex/edge id, label, or a property. For example:

id|name|lang|price|ISBN
1|lop|java|328|ISBN978-7-107-18618-5
2|ripple|java|199|ISBN978-7-100-13678-5

CSV is TEXT with comma , as the delimiter; when a column value itself contains a comma, it must be wrapped in double quotes, e.g.:

marko,29,Beijing
"li,nary",26,"Wu,han"

A JSON file must contain one JSON string per line, with a consistent format across lines:

{"source_name": "marko", "target_name": "vadas", "date": "20160110", "weight": 0.5}
{"source_name": "marko", "target_name": "josh", "date": "20130220", "weight": 1.0}
3.2.1.2 HDFS files or directories

You can also use HDFS files or directories as data sources; everything said above about local disk files or directories applies here too. In addition, since HDFS usually stores compressed files, the loader supports compressed files as well (and local disk files or directories equally support compression). Currently supported compression types: GZIP, BZ2, XZ, LZMA, SNAPPY_RAW, SNAPPY_FRAMED, Z, DEFLATE, LZ4_BLOCK, LZ4_FRAMED, ORC, and PARQUET.

3.2.1.3 Mainstream relational databases

The loader also supports some relational databases as data sources, currently MySQL, PostgreSQL, Oracle, and SQL Server. However, the table-structure requirements are strict for now: tables that require join-style lookups during import are not allowed. A join-style lookup means that after reading a row, some column's value cannot be used directly (e.g. a foreign key) and another query is needed to resolve its real value.

For example, suppose there are three tables: person, software, and created:

// person table schema
id | name | age | city

// software table schema
id | name | lang | price

// created table schema
id | p_id | s_id | date

If the schema declares the id strategy of person or software as PRIMARY_KEY with name as the primary key (note: this is the vertexlabel concept in hugegraph), then when importing edges the source and target vertex ids must be assembled, which requires looking up the name in the person/software table by p_id/s_id. Table structures that need such extra lookups are not yet supported by the loader. Two workarounds:
- Keep the id strategy of person and software as PRIMARY_KEY, but use the id column of the person and software tables as the vertices' primary-key property, so edge import can build the id directly from p_id/s_id plus the vertex label;
- Declare the id strategy of person and software as CUSTOMIZE and use the id column of the person and software tables directly as the vertex id, so edge import can use p_id and s_id as-is;

The key point is that edges must be able to use p_id and s_id directly, without an extra lookup.

3.2.2 Prepare vertex and edge data

3.2.2.1 Vertex data

A vertex data file consists of rows, generally one vertex per row, with each column becoming a vertex property. The following example uses CSV.

- person vertex data (no header in the data itself)

Tom,48,Beijing
Jerry,36,Shanghai

- software vertex data (header included in the data)

name,price
Photoshop,999
Office,388

3.2.2.2 Edge data

An edge data file consists of rows, generally one edge per row; some columns become the source and target vertex ids and the rest become edge properties. The following example uses JSON.

- knows edge data

{"source_name": "Tom", "target_name": "Jerry", "date": "2008-12-12"}

- created edge data
{"source_name": "Tom", "target_name": "Photoshop"}
{"source_name": "Tom", "target_name": "Office"}
{"source_name": "Jerry", "target_name": "Office"}
If a corrected data row still has problems, it is recorded in the failure file again (don't worry about duplicate rows). Each vertex or edge mapping with failed insertions produces its own failure files, split into parse-failure files (suffix .parse-error) and insert-failure files (suffix .insert-error), stored under the ${struct}/current directory. For example, if the mapping file has a vertex mapping person and an edge mapping knows, each with some bad rows, then after the Loader exits the ${struct}/current directory contains:
- person-b4cd32ab.parse-error: rows of vertex mapping person that failed to parse
- person-b4cd32ab.insert-error: rows of vertex mapping person that failed to insert
- knows-eb6b2bac.parse-error: rows of edge mapping knows that failed to parse
- knows-eb6b2bac.insert-error: rows of edge mapping knows that failed to insert

.parse-error and .insert-error do not always appear together: a .parse-error file exists only if some rows failed to parse, and an .insert-error file only if some rows failed to insert.

3.4.3 Files in the logs directory

Logs and error data produced during execution are written to the hugegraph-loader.log file.

3.4.4 Run the command

Run bin/hugegraph-loader with arguments:

bin/hugegraph-loader -g {GRAPH_NAME} -f ${INPUT_DESC_FILE} -s ${SCHEMA_FILE} -h {HOST} -p {PORT}
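For instance, with the bundled example files and a local server on the default port, the invocation looks like this (the host and port values are illustrative):

bin/hugegraph-loader -g hugegraph -f example/file/struct.json -s example/file/schema.groovy -h 127.0.0.1 -p 8080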
4 Complete Example

Below is the example from the example directory of the hugegraph-loader package (GitHub address).

4.1 Prepare the data

Vertex file example/file/vertex_person.csv:

marko,29,Beijing
vadas,27,Hongkong
josh,32,Beijing
peter,35,Shanghai
"li,nary",26,"Wu,han"
tom,null,NULL

Vertex file example/file/vertex_software.txt:

id|name|lang|price|ISBN
1|lop|java|328|ISBN978-7-107-18618-5
2|ripple|java|199|ISBN978-7-100-13678-5

Edge file example/file/edge_knows.json:

{"source_name": "marko", "target_name": "vadas", "date": "20160110", "weight": 0.5}
{"source_name": "marko", "target_name": "josh", "date": "20130220", "weight": 1.0}

Edge file example/file/edge_created.json:

{"aname": "marko", "bname": "lop", "date": "20171210", "weight": 0.4}
{"aname": "josh", "bname": "lop", "date": "20091111", "weight": 0.4}
{"aname": "josh", "bname": "ripple", "date": "20171210", "weight": 1.0}
{"aname": "peter", "bname": "lop", "date": "20170324", "weight": 0.2}

4.2 Write the schema

Schema file example/file/schema.groovy:
schema.propertyKey("name").asText().ifNotExist().create();
schema.propertyKey("age").asInt().ifNotExist().create();
schema.propertyKey("city").asText().ifNotExist().create();
schema.propertyKey("weight").asDouble().ifNotExist().create();
schema.vertexLabel("person").properties("name", "age", "city").primaryKeys("name").ifNotExist().create();
schema.vertexLabel("software").properties("name", "lang", "price").primaryKeys("name").ifNotExist().create();
-schema.indexLabel("personByName").onV("person").by("name").secondary().ifNotExist().create();
schema.indexLabel("personByAge").onV("person").by("age").range().ifNotExist().create();
schema.indexLabel("personByCity").onV("person").by("city").secondary().ifNotExist().create();
schema.indexLabel("personByAgeAndCity").onV("person").by("age", "city").secondary().ifNotExist().create();
    "label": "person",
    "input": {
      "type": "file",
      "path": "example/file/vertex_person.csv",
      "format": "CSV",
      "header": ["name", "age", "city"],
      "charset": "UTF-8",
      "skipped_line": {
        "regex": "(^#|^//).*"
      }
    },
    "null_values": ["NULL", "null", ""]
  },
  {
    "label": "software",
    "input": {
      "type": "file",
      "path": "example/file/vertex_software.txt",
      "format": "TEXT",
      "delimiter": "|",
      "charset": "GBK"
    },
    "id": "id",
    "ignored": ["ISBN"]
  }
],
"edges": [
    "target": ["target_name"],
    "input": {
      "type": "file",
      "path": "example/file/edge_knows.json",
      "format": "JSON",
      "date_format": "yyyyMMdd"
    },
    "field_mapping": {
      "source_name": "name",
      "target_name": "name"
    }
  },
  {
    "label": "created",
    "source": ["source_name"],
    "target": ["target_id"],
    "input": {
      "type": "file",
      "path": "example/file/edge_created.json",
      "format": "JSON",
      "date_format": "yyyy-MM-dd"
    },
    "field_mapping": {
      "source_name": "name"
    }
  }
]
}
4.4 Run the import command

sh bin/hugegraph-loader.sh -g hugegraph -f example/file/struct.json -s example/file/schema.groovy

When the import finishes, statistics like the following are printed:

vertices/edges has been loaded this time : 8/6
--------------------------------------------------
count metrics
 input read success : 14
 input read failure : 0
 vertex parse success : 8
 vertex parse failure : 0
 vertex insert success : 8
 vertex insert failure : 0
 edge parse success : 6
 edge parse failure : 0
 edge insert success : 6
 edge insert failure : 0

4.5 Import with spark-loader

Spark version: Spark 3+; other versions are untested.
HugeGraph Toolchain version: toolchain-1.0.0

The spark-loader arguments fall into two groups; note that because the two groups share some abbreviated option names, always use the full option names. The two groups may appear in any order:
- hugegraph arguments (see: hugegraph-loader argument reference)
- Spark job-submission arguments (see: Submitting Applications)

Example:
sh bin/hugegraph-spark-loader.sh --master yarn \
--deploy-mode cluster --name spark-hugegraph-loader --file ./hugegraph.json \
--username admin --token admin --host xx.xx.xx.xx --port 8093 \
--graph graph-test --num-executors 6 --executor-cores 16 --executor-memory 15g
3 - HugeGraph-Tools Quick Start

1 Overview of HugeGraph-Tools

HugeGraph-Tools is HugeGraph's component for automated deployment, management, and backup/restore.

2 Get HugeGraph-Tools

There are two ways to get HugeGraph-Tools (it is bundled in the Toolchain):
- Download the binary tarball
- Download the source and build it

2.1 Download the binary tarball

Download the latest HugeGraph-Toolchain package, then enter the tools subdirectory:

wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz
2.2 Download the source and build

Before building from source, make sure the wget command is installed.

Download the latest HugeGraph-Toolchain source package, then build from the root directory or build just the tool submodule:

# 1. get from github
git clone https://github.com/apache/hugegraph-toolchain.git

# 2. get from direct (e.g. here is 1.0.0, please choose the latest version)
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0-src.tar.gz

Build the tarball:

cd hugegraph-tools
mvn package -DskipTests

This produces the tarball hugegraph-tools-${version}.tar.gz.
3 Usage

3.1 Feature overview

After unpacking, enter the hugegraph-tools directory and run bin/hugegraph or bin/hugegraph help for usage information. The commands fall into:
- Graph management: graph-mode-set, graph-mode-get, graph-list, graph-get, graph-clear
- Asynchronous task management: task-list, task-get, task-delete, task-cancel, task-clear
- Gremlin: gremlin-execute, gremlin-schedule
- Backup/restore: backup, restore, migrate, schedule-backup, dump
- Install/deploy: deploy, clear, start-all, stop-all

Usage: hugegraph [options] [command] [command options]
3.2 [options] - global options

options are HugeGraph-Tools' global options, configurable in hugegraph-tools/bin/hugegraph:
- --graph, name of the graph HugeGraph-Tools operates on, default hugegraph
- --url, service address of HugeGraph-Server, default http://127.0.0.1:8080
- --user, username to pass when HugeGraph-Server has authentication enabled
- --password, the user's password when HugeGraph-Server has authentication enabled
- --timeout, timeout for connecting to HugeGraph-Server, default 30s
- --trust-store-file, path to the certificate file: the truststore HugeGraph-Client uses when --url is https; empty by default, meaning the truststore bundled with hugegraph-tools, conf/hugegraph.truststore
- --trust-store-password, password of the certificate file: the truststore password used when --url is https; empty by default, meaning the password of the bundled truststore

These global options can also be set via environment variables. One way is to export temporary environment variables on the command line, valid until that shell is closed:

Global option | Environment variable | Example |
---|---|---|
--url | HUGEGRAPH_URL | export HUGEGRAPH_URL=http://127.0.0.1:8080 |
--graph | HUGEGRAPH_GRAPH | export HUGEGRAPH_GRAPH=hugegraph |
--user | HUGEGRAPH_USERNAME | export HUGEGRAPH_USERNAME=admin |
--password | HUGEGRAPH_PASSWORD | export HUGEGRAPH_PASSWORD=test |
--timeout | HUGEGRAPH_TIMEOUT | export HUGEGRAPH_TIMEOUT=30 |
--trust-store-file | HUGEGRAPH_TRUST_STORE_FILE | export HUGEGRAPH_TRUST_STORE_FILE=/tmp/trust-store |
--trust-store-password | HUGEGRAPH_TRUST_STORE_PASSWORD | export HUGEGRAPH_TRUST_STORE_PASSWORD=xxxx |

The other way is to set the environment variables in the bin/hugegraph script:

#!/bin/bash

# Set environment here if needed
#export HUGEGRAPH_URL=
#export HUGEGRAPH_GRAPH=
#export HUGEGRAPH_USERNAME=
#export HUGEGRAPH_PASSWORD=
#export HUGEGRAPH_TIMEOUT=
#export HUGEGRAPH_TRUST_STORE_FILE=
#export HUGEGRAPH_TRUST_STORE_PASSWORD=
3.3 Graph management: graph-mode-set, graph-mode-get, graph-list, graph-get, graph-clear
- graph-mode-set, set the graph's restore mode
  - --graph-mode or -m, required, the mode to set; legal values are [NONE, RESTORING, MERGING, LOADING]
- graph-mode-get, get the graph's restore mode
- graph-list, list all graphs on a HugeGraph-Server
- graph-get, get a graph and its storage backend type
- graph-clear, clear all schema and data of a graph
  - --confirm-message or -c, required, the deletion confirmation message, entered manually as a second confirmation against accidental deletion: "I'm sure to delete all data", including the double quotes

To restore a backed-up graph as-is into a new graph, first set the graph mode to RESTORING; to merge a backed-up graph into an existing graph, first set it to MERGING. A sketch of the mode switch follows.
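A minimal sketch of switching modes around a restore, assuming the default --url and --graph:

bin/hugegraph graph-mode-set -m RESTORING   # before restoring into a new graph
bin/hugegraph graph-mode-get                # check the current mode
bin/hugegraph graph-mode-set -m NONE        # return to normal afterwards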
3.4 Asynchronous task management: task-list, task-get, task-delete
- task-list, list the asynchronous tasks of a graph, filterable by task status
  - --status, optional, the task status to show, i.e. filter tasks by status
  - --limit, optional, the number of tasks to fetch; default -1, meaning all matching tasks
- task-get, get the details of an asynchronous task
  - --task-id, required, the asynchronous task ID
- task-delete, delete the record of an asynchronous task
  - --task-id, required, the asynchronous task ID
- task-cancel, cancel the execution of an asynchronous task
  - --task-id, the ID of the asynchronous task to cancel
- task-clear, clean up completed asynchronous tasks
  - --force, optional; when set, clean up all asynchronous tasks, cancelling unfinished ones first and then clearing everything. By default only completed tasks are cleaned up

A usage sketch follows this list.
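A minimal sketch of the task commands (the task ID is illustrative):

bin/hugegraph task-list --limit 10   # the 10 most recent matching tasks
bin/hugegraph task-get --task-id 3   # details of task 3
bin/hugegraph task-clear             # clean up completed tasks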
3.5 Gremlin: gremlin-execute, gremlin-schedule
- gremlin-execute, send Gremlin statements to HugeGraph-Server to run queries or updates, executed synchronously, returning the result when done
  - --file or -f, the script file to execute, UTF-8 encoded; mutually exclusive with --script
  - --script or -s, the script string to execute; mutually exclusive with --file
  - --aliases or -a, Gremlin alias settings, in the form key1=value1,key2=value2,…
  - --bindings or -b, Gremlin binding settings, in the form key1=value1,key2=value2,…
  - --language or -l, the language of the Gremlin script, default gremlin-groovy

  --file and --script are mutually exclusive; exactly one must be set
- gremlin-schedule, send Gremlin statements to HugeGraph-Server to run queries or updates, executed asynchronously, returning the asynchronous task ID immediately after submission
  - --file or -f, the script file to execute, UTF-8 encoded; mutually exclusive with --script
  - --script or -s, the script string to execute; mutually exclusive with --file
  - --bindings or -b, Gremlin binding settings, in the form key1=value1,key2=value2,…
  - --language or -l, the language of the Gremlin script, default gremlin-groovy

  --file and --script are mutually exclusive; exactly one must be set. A usage sketch follows.
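A minimal sketch of both commands (the Gremlin query itself is illustrative):

bin/hugegraph gremlin-execute --script 'g.V().limit(3)'    # synchronous, returns results
bin/hugegraph gremlin-schedule --script 'g.V().limit(3)'   # asynchronous, returns a task ID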
3.6 Backup/restore
- backup, back up a graph's schema and/or data out of the HugeGraph system, as JSON on local disk or HDFS
  - --format, backup format, one of [json, text], default json
  - --all-properties, whether to back up all properties of vertices/edges; only effective when --format is text; default false
  - --label, the type of vertices/edges to back up; only effective when --format is text, and only when backing up vertices or edges
  - --properties, the vertex/edge properties to back up, comma-separated; only effective when --format is text, and only when backing up vertices or edges
  - --compress, whether to compress data during backup, default true
  - --directory or -d, directory for schema or data; default './{graphName}' for local directories and '{fs.default.name}/{graphName}' for HDFS
  - --huge-types or -t, data types to back up, comma-separated; either 'all' or a combination of one or more of [vertex,edge,vertex_label,edge_label,property_key,index_label]; 'all' means all six types, i.e. vertices, edges, and all schema
  - --log or -l, log directory, default the current directory
  - --retry, number of retries on failure, default 3
  - --split-size or -s, chunk size for splitting vertices or edges during backup, default 1048576
  - -D, dynamic parameters in -Dkey=value form, used to pass HDFS settings when backing up to HDFS, e.g. -Dfs.default.name=hdfs://localhost:9000
- restore, restore schema and/or data stored as JSON into a new graph (RESTORING mode) or merge it into an existing graph (MERGING mode)
  - --directory or -d, directory for schema or data; default './{graphName}' for local directories and '{fs.default.name}/{graphName}' for HDFS
  - --clean, whether to delete the --directory after the restore completes, default false
  - --huge-types or -t, data types to restore, comma-separated; either 'all' or a combination of one or more of [vertex,edge,vertex_label,edge_label,property_key,index_label]; 'all' means all six types, i.e. vertices, edges, and all schema
  - --log or -l, log directory, default the current directory
  - --retry, number of retries on failure, default 3
  - -D, dynamic parameters in -Dkey=value form, used to pass HDFS settings when restoring from HDFS, e.g. -Dfs.default.name=hdfs://localhost:9000

  restore can only be used on backups made with --format json
- migrate, migrate the currently connected graph to another HugeGraphServer
  - --target-graph, name of the target graph, default hugegraph
  - --target-url, the HugeGraphServer hosting the target graph, default http://127.0.0.1:8081
  - --target-username, username for accessing the target graph
  - --target-password, password for accessing the target graph
  - --target-timeout, timeout for accessing the target graph
  - --target-trust-store-file, truststore file for accessing the target graph
  - --target-trust-store-password, truststore password for accessing the target graph
  - --directory or -d, directory for the source graph's schema or data during migration; default './{graphName}' for local directories and '{fs.default.name}/{graphName}' for HDFS
  - --huge-types or -t, data types to migrate, comma-separated; either 'all' or a combination of one or more of [vertex,edge,vertex_label,edge_label,property_key,index_label]; 'all' means all six types, i.e. vertices, edges, and all schema
  - --log or -l, log directory, default the current directory
  - --retry, number of retries on failure, default 3
  - --split-size or -s, chunk size for splitting vertices or edges when backing up the source graph during migration, default 1048576
  - -D, dynamic parameters in -Dkey=value form, used to pass HDFS settings when migration needs to back up to HDFS, e.g. -Dfs.default.name=hdfs://localhost:9000
  - --graph-mode or -m, the mode to set on the target graph when restoring the source graph into it; legal values are [RESTORING, MERGING]
  - --keep-local-data, whether to keep the source-graph backup produced during migration, default false, i.e. the backup is not kept after migration
- schedule-backup, back up the graph periodically and retain a number of the latest backups (currently local filesystem only)
  - --directory or -d, required, the backup directory
  - --backup-num, optional, the number of latest backups to retain, default 3
  - --interval, optional, the backup period, in Linux crontab format
- dump, export all vertices and edges of a graph, stored by default as
  vertex vertex-edge1 vertex-edge2...
  in JSON format. Users can also customize the storage format: implement a class extending Formatter, e.g. CustomFormatter, under hugegraph-tools/src/main/java/com/baidu/hugegraph/formatter, and pass it as the formatter, e.g. bin/hugegraph dump -f CustomFormatter
  - --formatter or -f, the formatter to use, default JsonFormatter
  - --directory or -d, directory for schema or data, default the current directory
  - --log or -l, log directory, default the current directory
  - --retry, number of retries on failure, default 3
  - --split-size or -s, chunk size for splitting vertices or edges during backup, default 1048576
  - -D, dynamic parameters in -Dkey=value form, used to pass HDFS settings when backing up to HDFS, e.g. -Dfs.default.name=hdfs://localhost:9000

A backup/restore sketch follows this list.
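A minimal backup-and-restore sketch (the local directory name is illustrative):

bin/hugegraph backup -t all -d ./hugegraph-backup    # back up all schema and data
bin/hugegraph graph-mode-set -m RESTORING            # prepare the target graph
bin/hugegraph restore -t all -d ./hugegraph-backup   # restore from the JSON backup
bin/hugegraph graph-mode-set -m NONE                 # back to normal mode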
3.7 Install/deploy
- deploy, download, install, and start HugeGraph-Server and HugeGraph-Studio in one step
  - -v, required, the version of HugeGraph-Server and HugeGraph-Studio to install; the latest is 0.9
  - -p, required, the installation directory for HugeGraph-Server and HugeGraph-Studio
  - -u, optional, the download link for the HugeGraph-Server and HugeGraph-Studio tarballs
- clear, clean up the HugeGraph-Server and HugeGraph-Studio directories and tarballs
  - -p, required, the directory of HugeGraph-Server and HugeGraph-Studio to clean
- start-all, start HugeGraph-Server and HugeGraph-Studio in one step, with monitoring that automatically restarts the services if they die
  - -v, required, the version of HugeGraph-Server and HugeGraph-Studio to start; the latest is 0.9
  - -p, required, the directory where HugeGraph-Server and HugeGraph-Studio are installed
- stop-all, stop HugeGraph-Server and HugeGraph-Studio in one step

The deploy command takes an optional -u; when provided, the given download URL is used instead of the default to fetch the tarballs, and the URL is written to the ~/hugegraph-download-url-prefix file. Afterwards, if no URL is given, tarballs are downloaded from the address in ~/hugegraph-download-url-prefix; if neither -u nor ~/hugegraph-download-url-prefix is present, the default download address is used. A sketch follows.
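A minimal sketch (the mirror URL is a hypothetical example):

bin/hugegraph deploy -v 0.9 -p services -u http://example.com/mirror/hugegraph   # install and start from a custom mirror
bin/hugegraph stop-all                                                           # later: stop both services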
3.8 Detailed command arguments

The detailed arguments of each subcommand are as follows:

Usage: hugegraph [options] [command] [command options]
2. Submit the task: after a successful submission, the graph area returns the submission result and the task ID.
3. Task details: a View entry is provided, which jumps to the task details to check how the current task is executing. After jumping to the task center, the currently running task row is shown directly. Clicking the View entry jumps to the task management list, as follows:
4. View the result:
- The result is displayed as JSON

3.5.4 OLAP algorithm tasks

Hubble does not yet provide visual execution of OLAP algorithms; OLAP-style algorithm tasks can be run via the RESTful API, and the corresponding task can be looked up by ID in task management to check progress, results, and so on.

3.5.5 Delete metadata, rebuild indexes

1. Create the task:
- In the metadata modeling module, deleting metadata can create an asynchronous metadata-deletion task
- When editing an existing vertex/edge type, adding an index can create an asynchronous index-creation task
2. Task details:
- After confirming/saving, you can jump to the task center to view the details of the current task
5 - HugeGraph-Client Quick Start

1 Overview of HugeGraph-Client

HugeGraph-Client sends HTTP requests to HugeGraph-Server and parses the server's results. Currently only a Java version is provided; users can write Java code with HugeGraph-Client to operate on HugeGraph, e.g. creating, reading, updating, and deleting metadata and graph data, or executing gremlin statements.

2 Requirements
- java 11 (compatible with java 8)
- maven 3.5+

3 Usage Workflow

The basic steps for using HugeGraph-Client:
- Create an Eclipse / IDEA Maven project;
- Add the HugeGraph-Client dependency to the pom file;
- Create a class and call the HugeGraph-Client interfaces;

The complete example in the next section walks through the details.

4 Complete Example

4.1 Create a Maven project

Either Eclipse or IntelliJ IDEA can be used to create the project:

4.2 Add the hugegraph-client dependency

Add the hugegraph-client dependency:
<dependencies>
<dependency>
<groupId>org.apache.hugegraph</groupId>
<artifactId>hugegraph-client</artifactId>
<version>${version}</version>
</dependency>
import java.util.Iterator;
import java.util.List;

import org.apache.hugegraph.driver.GraphManager;
import org.apache.hugegraph.driver.GremlinManager;
import org.apache.hugegraph.driver.HugeClient;
import org.apache.hugegraph.driver.SchemaManager;
import org.apache.hugegraph.structure.constant.T;
import org.apache.hugegraph.structure.graph.Edge;
import org.apache.hugegraph.structure.graph.Path;
import org.apache.hugegraph.structure.graph.Vertex;
import org.apache.hugegraph.structure.gremlin.Result;
import org.apache.hugegraph.structure.gremlin.ResultSet;
public class SingleExample {
.create();
GraphManager graph = hugeClient.graph();
Vertex marko = graph.addVertex(T.LABEL, "person", "name", "marko",
        "age", 29, "city", "Beijing");
Vertex vadas = graph.addVertex(T.LABEL, "person", "name", "vadas",
        "age", 27, "city", "Hongkong");
Vertex lop = graph.addVertex(T.LABEL, "software", "name", "lop",
        "lang", "java", "price", 328);
Vertex josh = graph.addVertex(T.LABEL, "person", "name", "josh",
        "age", 32, "city", "Beijing");
Vertex ripple = graph.addVertex(T.LABEL, "software", "name", "ripple",
        "lang", "java", "price", 199);
Vertex peter = graph.addVertex(T.LABEL, "person", "name", "peter",
        "age", 35, "city", "Shanghai");
marko.addEdge("knows", vadas, "date", "2016-01-10", "weight", 0.5);
4.3.2 BatchExample

import java.util.ArrayList;
import java.util.List;

import org.apache.hugegraph.driver.GraphManager;
import org.apache.hugegraph.driver.HugeClient;
import org.apache.hugegraph.driver.SchemaManager;
import org.apache.hugegraph.structure.graph.Edge;
import org.apache.hugegraph.structure.graph.Vertex;
public class BatchExample {
hugeClient.close();
}
}
4.4 Run the Example

Start the Server before running the Example; see HugeGraph-Server Quick Start for the startup process.

4.5 Example Notes

6 - HugeGraph-Computer Quick Start

1 Overview of HugeGraph-Computer

HugeGraph-Computer is a distributed graph-processing system (OLAP). It is an implementation of Pregel and can run on Kubernetes.

Features
- Supports distributed MPP graph computation, integrating HugeGraph as graph input/output storage.
- Algorithms follow the BSP (Bulk Synchronous Parallel) model, computing through multiple parallel iterations, each iteration being one superstep.
- Automatic memory management. The framework never runs out of memory (OOM), because it spills part of the data to disk when it cannot hold it all in memory.
- Parts of an edge set, or the messages of a supernode, can sit in memory, so nothing is ever lost.
- Data can be loaded from HDFS, HugeGraph, or any other system.
- Results can be written to HDFS, HugeGraph, or any other system.
- Easy to develop new algorithms: focus on vertex-centric processing as if on a single server, without worrying about message passing or memory/storage management.

2 Getting Started

2.1 Run the PageRank algorithm locally

To run algorithms with HugeGraph-Computer, you need 64-bit Java 11 or above, and HugeGraph-Server and Etcd must be deployed first.

There are two ways to get HugeGraph-Computer:
- Download the precompiled tarball
- Clone the source, build, and package

2.1 Download the compiled archive

Download the latest HugeGraph-Computer release package:

wget https://github.com/apache/hugegraph-computer/releases/download/v${version}/hugegraph-computer-${version}.tar.gz
tar zxvf hugegraph-computer-${version}.tar.gz
2.2 Clone source code to compile and package
Clone the latest HugeGraph-Computer source package:
$ git clone https://github.com/apache/hugegraph-computer.git
Build the tarball:
cd hugegraph-computer
mvn clean package -DskipTests
2.3 启动 master 节点
您可以使用 -c
参数指定配置文件, 更多computer 配置请看: Computer Config Options
cd hugegraph-computer-${version}
bin/start-computer.sh -d local -r master
2.4 Start the worker node
bin/start-computer.sh -d local -r worker
2.5 Query the algorithm results
2.5.1 Enable OLAP index queries for the server
If the OLAP index is not enabled, it needs to be enabled first; for more details see: modify-graphs-read-mode
PUT http://localhost:8080/graphs/hugegraph/graph_read_mode
"ALL"
2.5.2 Query the page_rank property value:
curl "http://localhost:8080/graphs/hugegraph/graph/vertices?page&limit=3" | gunzip
diff --git a/cn/docs/quickstart/hugegraph-client/index.html b/cn/docs/quickstart/hugegraph-client/index.html
index c385b174b..50837028f 100644
--- a/cn/docs/quickstart/hugegraph-client/index.html
+++ b/cn/docs/quickstart/hugegraph-client/index.html
@@ -4,17 +4,17 @@
HugeGraph-Client Quick Start
1 HugeGraph-Client Overview
HugeGraph-Client sends HTTP requests to HugeGraph-Server, then receives and parses the Server's execution results. Currently only a Java version is provided. Users can use HugeGraph-Client to write Java code that operates on HugeGraph, for example to create, read, update, and delete metadata and graph data, or to execute Gremlin statements.
2 Requirements
- Java 11 (compatible with Java 8)
- Maven 3.5+
3 Usage Workflow
The basic steps for using HugeGraph-Client are:
- Create a new Eclipse / IDEA Maven project;
- Add the HugeGraph-Client dependency to the pom file;
- Create a class and call the HugeGraph-Client interfaces;
See the complete example in the next section for the detailed usage.
4 Complete Example
4.1 Create a Maven project
You can use either Eclipse or IntelliJ IDEA to create the project:
4.2 Add the hugegraph-client dependency
Add the hugegraph-client dependency
<dependencies>
<dependency>
- <groupId>com.baidu.hugegraph</groupId>
+ <groupId>org.apache.hugegraph</groupId>
<artifactId>hugegraph-client</artifactId>
<version>${version}</version>
</dependency>
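For reference, the example classes below assume a HugeClient built roughly like this (a minimal sketch; replace the address and graph name with your own deployment):
import org.apache.hugegraph.driver.HugeClient;
import org.apache.hugegraph.driver.SchemaManager;

public class ClientBootstrap {
    public static void main(String[] args) {
        // Connect to the HugeGraph-Server started in the Server quickstart
        HugeClient hugeClient = HugeClient.builder("http://localhost:8080", "hugegraph").build();
        SchemaManager schema = hugeClient.schema();
        // Define a property key to verify the connection works end to end
        schema.propertyKey("name").asText().ifNotExist().create();
        hugeClient.close();
    }
}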
@@ -31,16 +31,16 @@
import java.util.Iterator;
import java.util.List;
-import com.baidu.hugegraph.driver.GraphManager;
-import com.baidu.hugegraph.driver.GremlinManager;
-import com.baidu.hugegraph.driver.HugeClient;
-import com.baidu.hugegraph.driver.SchemaManager;
-import com.baidu.hugegraph.structure.constant.T;
-import com.baidu.hugegraph.structure.graph.Edge;
-import com.baidu.hugegraph.structure.graph.Path;
-import com.baidu.hugegraph.structure.graph.Vertex;
-import com.baidu.hugegraph.structure.gremlin.Result;
-import com.baidu.hugegraph.structure.gremlin.ResultSet;
+import org.apache.hugegraph.driver.GraphManager;
+import org.apache.hugegraph.driver.GremlinManager;
+import org.apache.hugegraph.driver.HugeClient;
+import org.apache.hugegraph.driver.SchemaManager;
+import org.apache.hugegraph.structure.constant.T;
+import org.apache.hugegraph.structure.graph.Edge;
+import org.apache.hugegraph.structure.graph.Path;
+import org.apache.hugegraph.structure.graph.Vertex;
+import org.apache.hugegraph.structure.gremlin.Result;
+import org.apache.hugegraph.structure.gremlin.ResultSet;
public class SingleExample {
@@ -128,17 +128,17 @@
.create();
GraphManager graph = hugeClient.graph();
- Vertex marko = graph.addVertex(T.label, "person", "name", "marko",
+ Vertex marko = graph.addVertex(T.LABEL, "person", "name", "marko",
"age", 29, "city", "Beijing");
- Vertex vadas = graph.addVertex(T.label, "person", "name", "vadas",
+ Vertex vadas = graph.addVertex(T.LABEL, "person", "name", "vadas",
"age", 27, "city", "Hongkong");
- Vertex lop = graph.addVertex(T.label, "software", "name", "lop",
+ Vertex lop = graph.addVertex(T.LABEL, "software", "name", "lop",
"lang", "java", "price", 328);
- Vertex josh = graph.addVertex(T.label, "person", "name", "josh",
+ Vertex josh = graph.addVertex(T.LABEL, "person", "name", "josh",
"age", 32, "city", "Beijing");
- Vertex ripple = graph.addVertex(T.label, "software", "name", "ripple",
+ Vertex ripple = graph.addVertex(T.LABEL, "software", "name", "ripple",
"lang", "java", "price", 199);
- Vertex peter = graph.addVertex(T.label, "person", "name", "peter",
+ Vertex peter = graph.addVertex(T.LABEL, "person", "name", "peter",
"age", 35, "city", "Shanghai");
marko.addEdge("knows", vadas, "date", "2016-01-10", "weight", 0.5);
@@ -176,11 +176,11 @@
4.3.2 BatchExample
import java.util.ArrayList;
import java.util.List;
-import com.baidu.hugegraph.driver.GraphManager;
-import com.baidu.hugegraph.driver.HugeClient;
-import com.baidu.hugegraph.driver.SchemaManager;
-import com.baidu.hugegraph.structure.graph.Edge;
-import com.baidu.hugegraph.structure.graph.Vertex;
+import org.apache.hugegraph.driver.GraphManager;
+import org.apache.hugegraph.driver.HugeClient;
+import org.apache.hugegraph.driver.SchemaManager;
+import org.apache.hugegraph.structure.graph.Edge;
+import org.apache.hugegraph.structure.graph.Vertex;
public class BatchExample {
@@ -309,7 +309,7 @@
hugeClient.close();
}
}
4.4 Run the Example
Before running the example, you need to start the Server; see HugeGraph-Server Quick Start for the startup procedure.
4.5 Example Description
diff --git a/cn/docs/quickstart/hugegraph-loader/index.html b/cn/docs/quickstart/hugegraph-loader/index.html
index 520cae7b8..1548de799 100644
--- a/cn/docs/quickstart/hugegraph-loader/index.html
+++ b/cn/docs/quickstart/hugegraph-loader/index.html
@@ -10,39 +10,39 @@
HugeGraph-Loader Quick Start
1 HugeGraph-Loader Overview
HugeGraph-Loader is HugeGraph's data import component. It converts data from multiple sources into graph vertices and edges and imports them into the graph database in batches.
Currently supported data sources include:
- Local disk files or directories, supporting TEXT, CSV, and JSON formats as well as compressed files
- HDFS files or directories, supporting compressed files
- Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, and SQL Server
Local disk files and HDFS files support resumable loading.
Details are given later.
Note: HugeGraph-Loader depends on the HugeGraph Server service; see HugeGraph-Server Quick Start for downloading and starting the Server.
2 Get HugeGraph-Loader
There are two ways to get HugeGraph-Loader:
- Download the compiled archive
- Clone the source code, then compile and install
2.1 Download the compiled archive
Download the latest HugeGraph-Toolchain release package, which contains the full toolset (loader + tool + hubble). If you have already downloaded it, skip this step.
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz
2.2 Clone the source code, then compile and install
Clone the latest HugeGraph-Loader source package:
# 1. get from github
git clone https://github.com/apache/hugegraph-toolchain.git
# 2. get from direct (e.g. here is 1.0.0, please choose the latest version)
-wget https://dist.apache.org/repos/dist/dev/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0-src.tar.gz
+wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0-src.tar.gz
Due to the restrictions of the Oracle ojdbc license, you need to manually install ojdbc into the local Maven repository.
Visit the Oracle JDBC download page and select the Oracle Database 12c Release 2 (12.2.0.1) drivers. After opening the link, select "ojdbc8.jar".
To install ojdbc8 into the local Maven repository, enter the directory where ojdbc8.jar is located and run the following command.
mvn install:install-file -Dfile=./ojdbc8.jar -DgroupId=com.oracle -DartifactId=ojdbc8 -Dversion=12.2.0.1 -Dpackaging=jar
Compile and generate the tar package:
cd hugegraph-loader
mvn clean package -DskipTests
3 Usage Workflow
The basic workflow of using HugeGraph-Loader consists of the following steps:
- Write the graph schema
- Prepare the data files
- Write the input-source mapping file
- Run the import command
3.1 Write the graph schema
This step is the modeling process: you need a clear idea of your existing data and of the graph model you want to create, and then write a schema to build that model.
For example, suppose you want a graph with two kinds of vertices and two kinds of edges: the vertices are "person" and "software", and the edges are "person knows person" and "person created software". The vertices and edges carry properties, e.g. the vertex "person" has "name" and "age", "software" has "name" and "price", and the edge "knows" has a "date" property.
Example graph model
After designing the graph model, we can write the schema definition in groovy and save it to a file, here named schema.groovy.
// create some property keys
@@ -61,25 +61,25 @@
schema.edgeLabel("knows").sourceLabel("person").targetLabel("person").ifNotExist().create();
// create the created edge label; such edges point from person to software
schema.edgeLabel("created").sourceLabel("person").targetLabel("software").ifNotExist().create();
For a detailed description of the schema, refer to the corresponding section in hugegraph-client.
3.2 Prepare the data
The data sources currently supported by HugeGraph-Loader include:
- Local disk files or directories
- HDFS files or directories
- Some relational databases
3.2.1 Data source structure
3.2.1.1 Local disk files or directories
You can specify local disk files as the data source. If the data is spread across multiple files, a directory can also be used as the data source, but multiple directories are not yet supported.
For example: if the data is scattered across the files part-0, part-1 … part-n, they must all be placed in one directory before the import can run. Then, in the loader's mapping file, simply set path to that directory.
The supported file formats include:
- TEXT
- CSV
- JSON
TEXT is a text file with a custom delimiter. The first line is usually a header recording the name of each column; a file without a header line is also allowed (specify this in the mapping file). Every other line represents one record and will be converted into a vertex/edge; each column of a line corresponds to one field and will be converted into the id, label, or a property of the vertex/edge.
An example:
id|name|lang|price|ISBN
1|lop|java|328|ISBN978-7-107-18618-5
2|ripple|java|199|ISBN978-7-100-13678-5
CSV is a TEXT file whose delimiter is the comma (,). When a column value itself contains commas, that value must be wrapped in double quotes, e.g.:
marko,29,Beijing
"li,nary",26,"Wu,han"
A JSON file requires every line to be a JSON string, and all lines must share the same format.
{"source_name": "marko", "target_name": "vadas", "date": "20160110", "weight": 0.5}
{"source_name": "marko", "target_name": "josh", "date": "20130220", "weight": 1.0}
3.2.1.2 HDFS files or directories
You can also specify HDFS files or directories as the data source; all of the above requirements on local disk files or directories apply here as well. In addition, since HDFS usually stores compressed files, the loader also supports compressed files, and local disk files or directories support compression too.
The currently supported compression types include: GZIP, BZ2, XZ, LZMA, SNAPPY_RAW, SNAPPY_FRAMED, Z, DEFLATE, LZ4_BLOCK, LZ4_FRAMED, ORC, and PARQUET.
3.2.1.3 Mainstream relational databases
The loader also supports some relational databases as data sources, currently MySQL, PostgreSQL, Oracle, and SQL Server.
However, the requirements on the table structure are currently strict: if join-like lookups are needed during import, such a table structure is not allowed. A join-like lookup means: after reading a row of the table, the value of some column cannot be used directly (for example a foreign key), and another query is needed to determine its real value.
As an example, suppose there are three tables: person, software, and created
// person table structure
id | name | age | city
// software table structure
id | name | lang | price
// created table structure
id | p_id | s_id | date
If, when modeling the schema, the id strategy of person or software is set to PRIMARY_KEY with name as the primary key (note: this is the vertexlabel concept in hugegraph), then when importing edge data the ids of the source and target vertices must be assembled, which requires looking up the corresponding name in the person/software table by p_id/s_id. The loader does not yet support table structures that need such extra queries. In that case, one of the following two alternatives can be used instead:
- Still set the id strategy of person and software to PRIMARY_KEY, but use the id column of the person and software tables as the vertices' primary-key property; then, when importing edges, the vertex id can be generated directly by concatenating p_id/s_id with the vertex label;
- Set the id strategy of person and software to CUSTOMIZE and use the id column of the person and software tables directly as the vertex id; then p_id and s_id can be used directly when importing edges;
The key point is to let the edges use p_id and s_id directly, without an extra query, as sketched below.
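A sketch of the second alternative (the CUSTOMIZE id strategy) via the Java client; useCustomizeNumberId assumes the table id columns are numeric, and the property keys are assumed to already exist:
import org.apache.hugegraph.driver.HugeClient;
import org.apache.hugegraph.driver.SchemaManager;

public class CustomizeIdSketch {
    public static void main(String[] args) {
        HugeClient hugeClient = HugeClient.builder("http://localhost:8080", "hugegraph").build();
        SchemaManager schema = hugeClient.schema();
        // Vertices take their id directly from the table's id column,
        // so edges can reference p_id / s_id without an extra lookup
        schema.vertexLabel("person")
              .useCustomizeNumberId()
              .properties("name", "age", "city")
              .ifNotExist().create();
        schema.vertexLabel("software")
              .useCustomizeNumberId()
              .properties("name", "lang", "price")
              .ifNotExist().create();
        hugeClient.close();
    }
}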
3.2.2 Prepare vertex and edge data
3.2.2.1 Vertex data
A vertex data file consists of lines of data; generally each line is one vertex and each column becomes a vertex property. The CSV format is used as the example below.
- person vertex data (the data itself contains no header)
Tom,48,Beijing
Jerry,36,Shanghai
- software vertex data (the data itself contains a header)
name,price
Photoshop,999
Office,388
3.2.2.2 Edge data
An edge data file consists of lines of data; generally each line is one edge. Some of the columns serve as the ids of the source and target vertices, and the other columns become edge properties. The JSON format is used as the example below.
- knows edge data
{"source_name": "Tom", "target_name": "Jerry", "date": "2008-12-12"}
- created edge data
{"source_name": "Tom", "target_name": "Photoshop"}
{"source_name": "Tom", "target_name": "Office"}
{"source_name": "Jerry", "target_name": "Office"}
@@ -392,21 +392,22 @@
Of course, if a corrected data line still has problems, it will be recorded in the failure file again (don't worry about duplicate lines). When data insertion fails for a vertex or edge mapping, that mapping produces its own failure file. Failure files are further divided into parse failure files (suffix .parse-error) and insert failure files (suffix .insert-error),
and they are stored in the ${struct}/current directory. For example, if the mapping file contains a vertex mapping person and an edge mapping knows, and each has some failed lines, then after the Loader exits you will see the following files in the ${struct}/current directory:
- person-b4cd32ab.parse-error: data that the vertex mapping person failed to parse
- person-b4cd32ab.insert-error: data that the vertex mapping person failed to insert
- knows-eb6b2bac.parse-error: data that the edge mapping knows failed to parse
- knows-eb6b2bac.insert-error: data that the edge mapping knows failed to insert
.parse-error and .insert-error do not always exist together: the .parse-error file exists only if some lines failed to parse, and the .insert-error file exists only if some lines failed to insert.
3.4.3 Files in the logs directory
During execution, logs and error data are written to the hugegraph-loader.log file.
3.4.4 Run the command
Run bin/hugegraph-loader and pass in the parameters
bin/hugegraph-loader -g {GRAPH_NAME} -f ${INPUT_DESC_FILE} -s ${SCHEMA_FILE} -h {HOST} -p {PORT}
4 Complete Example
The following is the example from the example directory in the hugegraph-loader package. (GitHub link)
4.1 Prepare the data
Vertex file: example/file/vertex_person.csv
marko,29,Beijing
vadas,27,Hongkong
josh,32,Beijing
peter,35,Shanghai
"li,nary",26,"Wu,han"
tom,null,NULL
Vertex file: example/file/vertex_software.txt
id|name|lang|price|ISBN
1|lop|java|328|ISBN978-7-107-18618-5
2|ripple|java|199|ISBN978-7-100-13678-5
Edge file: example/file/edge_knows.json
{"source_name": "marko", "target_name": "vadas", "date": "20160110", "weight": 0.5}
{"source_name": "marko", "target_name": "josh", "date": "20130220", "weight": 1.0}
Edge file: example/file/edge_created.json
{"aname": "marko", "bname": "lop", "date": "20171210", "weight": 0.4}
{"aname": "josh", "bname": "lop", "date": "20091111", "weight": 0.4}
{"aname": "josh", "bname": "ripple", "date": "20171210", "weight": 1.0}
{"aname": "peter", "bname": "lop", "date": "20170324", "weight": 0.2}
4.2 Write the schema
Schema file: example/file/schema.groovy
schema.propertyKey("name").asText().ifNotExist().create();
schema.propertyKey("age").asInt().ifNotExist().create();
schema.propertyKey("city").asText().ifNotExist().create();
schema.propertyKey("weight").asDouble().ifNotExist().create();
@@ -417,7 +418,6 @@
schema.vertexLabel("person").properties("name", "age", "city").primaryKeys("name").ifNotExist().create();
schema.vertexLabel("software").properties("name", "lang", "price").primaryKeys("name").ifNotExist().create();
-schema.indexLabel("personByName").onV("person").by("name").secondary().ifNotExist().create();
schema.indexLabel("personByAge").onV("person").by("age").range().ifNotExist().create();
schema.indexLabel("personByCity").onV("person").by("city").secondary().ifNotExist().create();
schema.indexLabel("personByAgeAndCity").onV("person").by("age", "city").secondary().ifNotExist().create();
@@ -435,26 +435,27 @@
"label": "person",
"input": {
"type": "file",
- "path": "example/vertex_person.csv",
+ "path": "example/file/vertex_person.csv",
"format": "CSV",
"header": ["name", "age", "city"],
- "charset": "UTF-8"
+ "charset": "UTF-8",
+ "skipped_line": {
+ "regex": "(^#|^//).*"
+ }
},
- "mapping": {
- "name": "name",
- "age": "age",
- "city": "city"
- }
+ "null_values": ["NULL", "null", ""]
},
{
"label": "software",
"input": {
"type": "file",
- "path": "example/vertex_software.text",
+ "path": "example/file/vertex_software.txt",
"format": "TEXT",
"delimiter": "|",
"charset": "GBK"
- }
+ },
+ "id": "id",
+ "ignored": ["ISBN"]
}
],
"edges": [
@@ -464,50 +465,51 @@
"target": ["target_name"],
"input": {
"type": "file",
- "path": "example/edge_knows.json",
- "format": "JSON"
+ "path": "example/file/edge_knows.json",
+ "format": "JSON",
+ "date_format": "yyyyMMdd"
},
- "mapping": {
+ "field_mapping": {
"source_name": "name",
"target_name": "name"
}
},
{
"label": "created",
- "source": ["aname"],
- "target": ["bname"],
+ "source": ["source_name"],
+ "target": ["target_id"],
"input": {
"type": "file",
- "path": "example/edge_created.json",
- "format": "JSON"
+ "path": "example/file/edge_created.json",
+ "format": "JSON",
+ "date_format": "yyyy-MM-dd"
},
- "mapping": {
- "aname": "name",
- "bname": "name"
+ "field_mapping": {
+ "source_name": "name"
}
}
]
}
4.4 Run the import command
sh bin/hugegraph-loader.sh -g hugegraph -f example/file/struct.json -s example/file/schema.groovy
After the import finishes, statistics similar to the following will appear:
vertices/edges has been loaded this time : 8/6
--------------------------------------------------
count metrics
 input read success : 14
 input read failure : 0
 vertex parse success : 8
 vertex parse failure : 0
 vertex insert success : 8
 vertex insert failure : 0
 edge parse success : 6
 edge parse failure : 0
 edge insert success : 6
 edge insert failure : 0
4.5 Import with spark-loader
Spark version: Spark 3+; other versions have not been tested.
HugeGraph Toolchain version: toolchain-1.0.0
The parameters of spark-loader fall into two groups. Note: because the abbreviated names of the two overlap, please use the full parameter names. The two kinds of parameters do not need to appear in any particular order.
- hugegraph parameters (see: hugegraph-loader parameter description)
- Spark job submission parameters (see: Submitting Applications)
Example:
sh bin/hugegraph-spark-loader.sh --master yarn \
--deploy-mode cluster --name spark-hugegraph-loader --file ./hugegraph.json \
--username admin --token admin --host xx.xx.xx.xx --port 8093 \
--graph graph-test --num-executors 6 --executor-cores 16 --executor-memory 15g
HugeGraph-Server is the core part of the HugeGraph project, containing submodules such as Core, Backend, and API.
The Core module implements the TinkerPop interface; the Backend module manages data storage, and the currently supported backends include Memory, Cassandra, ScyllaDB, and RocksDB; the API module provides the HTTP Server, which converts the Client's HTTP requests into calls to Core.
The document frequently uses both spellings HugeGraph-Server and HugeGraphServer, and similarly for other components. The two spellings carry essentially the same meaning and can be distinguished as follows: HugeGraph-Server refers to the server-side component code, while HugeGraphServer refers to the service process.
Prefer starting HugeGraph-Server in a Java 11 environment; compatibility with Java 8 is currently retained.
Before reading further, be sure to run the java -version command to check the JDK version
java -version
If you are using the RocksDB backend, be sure to run the gcc --version command to check the GCC version; other backends do not need this.
gcc --version
There are three ways to deploy the HugeGraph-Server component:
HugeGraph-Tools provides a one-click deployment command-line tool; you can use it to quickly download, unpack, configure, and start HugeGraph-Server and HugeGraph-Hubble. The latest HugeGraph-Toolchain already contains all of these tools; just download and unpack it to get the whole tool collection
# download toolchain package, it includes loader + tool + hubble, please check the latest version (here is 1.0.0)
-wget https://dist.apache.org/repos/dist/dev/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
+wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
tar zxf *hugegraph-*.tar.gz
# enter the tool's package
cd *hugegraph*/*tool*
Note: ${version} is the version number; for the latest version see the Download page, or download directly from the links on that page
The overall entry script of HugeGraph-Tools is bin/hugegraph; you can use the help subcommand to view its usage. Only the one-click deployment command is introduced here.
bin/hugegraph deploy -v {hugegraph-version} -p {install-path} [-u {download-path-prefix}]
{hugegraph-version} indicates the version of HugeGraphServer and HugeGraphStudio to deploy; see the conf/version-mapping.yaml file for version information. {install-path} specifies the installation directory of HugeGraphServer and HugeGraphStudio. {download-path-prefix} is optional and specifies the download address of the HugeGraphServer and HugeGraphStudio tarballs; the default download address is used when it is not provided. For example, to start HugeGraph-Server and HugeGraphStudio version 0.6, write the above command as bin/hugegraph deploy -v 0.6 -p services.
# use the latest version, here is 1.0.0 for example
-wget https://dist.apache.org/repos/dist/dev/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz
+wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz
Before compiling from source, make sure the wget command is installed
Download the HugeGraph source code
git clone https://github.com/apache/hugegraph.git
Compile and package to generate the tar package
cd hugegraph
@@ -63,37 +63,37 @@
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
......
After successful execution, a hugegraph-*.tar.gz file is generated in the hugegraph directory; that is the tarball produced by compilation.
You can also refer to the Docker deployment method.
If you need to quickly start HugeGraph just for testing, you only need to modify a few configuration items (see the next section). For a detailed configuration introduction, refer to the configuration document and the introduction to configuration items
Startup is divided into "first startup" and "subsequent startup". This distinction exists because the backend database must be initialized before the first startup, after which the service is started. When the service has been stopped manually, or needs to be started again for other reasons, the backend database is persistent, so the service can be started directly.
When HugeGraphServer starts, it connects to the backend storage and tries to check its version number. If the backend is not initialized, or the backend has been initialized but the version does not match (old-version data), HugeGraphServer will fail to start and report an error.
If you need to access HugeGraphServer externally, modify the restserver.url configuration item in rest-server.properties
(the default is http://127.0.0.1:8080) and change it to a machine name or IP address.
Since the configuration (hugegraph.properties) and startup steps required by the various backends differ slightly, the configuration and startup of each backend are introduced one by one below.
5.1 Memory
Modify hugegraph.properties
backend=memory
serializer=text
The data of the Memory backend is kept in memory and cannot be persisted; the backend does not need initialization. This is the only backend that requires no initialization.
Start the server
bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
The prompted URL is the same as the restserver.url configured in rest-server.properties
5.2 RocksDB
RocksDB is an embedded database and needs no manual installation or deployment. It requires GCC version >= 4.3.0 (GLIBCXX_3.4.10); if this is not satisfied, upgrade GCC in advance
Modify hugegraph.properties
backend=rocksdb
serializer=binary
rocksdb.data_path=.
rocksdb.wal_path=.
Initialize the database (required only on first startup)
cd hugegraph-${version}
bin/init-store.sh
Start the server
bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
5.3 Cassandra
Install Cassandra yourself; version 3.0 or above is required, download link
Modify hugegraph.properties
backend=cassandra
serializer=cassandra

# cassandra backend config
cassandra.host=localhost
cassandra.port=9042
cassandra.username=
cassandra.password=
#cassandra.connect_timeout=5
#cassandra.read_timeout=20

#cassandra.keyspace.strategy=SimpleStrategy
#cassandra.keyspace.replication=3
Initialize the database (required only on first startup)
cd hugegraph-${version}
bin/init-store.sh
Initing HugeGraph Store...
2017-12-01 11:26:51 1424 [main] [INFO ] com.baidu.hugegraph.HugeGraph [] - Opening backend store: 'cassandra'
@@ -115,36 +115,36 @@
Start the server
bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
5.4 ScyllaDB
Install ScyllaDB yourself; version 2.1 or above is recommended, download link
Modify hugegraph.properties
backend=scylladb
serializer=scylladb

# cassandra backend config
cassandra.host=localhost
cassandra.port=9042
cassandra.username=
cassandra.password=
#cassandra.connect_timeout=5
#cassandra.read_timeout=20

#cassandra.keyspace.strategy=SimpleStrategy
#cassandra.keyspace.replication=3
Since ScyllaDB itself is an "optimized version" of Cassandra, if you have not installed ScyllaDB you can also use Cassandra directly as the backend storage: just change backend and serializer to scylladb, and point host and port to the seeds and port of the Cassandra cluster. However, this is not recommended, as it forfeits the advantages of ScyllaDB itself.
Initialize the database (required only on first startup)
cd hugegraph-${version}
bin/init-store.sh
Start the server
bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
5.5 HBase
Install HBase yourself; version 2.0 or above is required, download link
Modify hugegraph.properties
backend=hbase
serializer=hbase

# hbase backend config
hbase.hosts=localhost
hbase.port=2181
# Note: recommend to modify the HBase partition number by the actual/env data amount & RS amount before init store
# it may influence the loading speed a lot
#hbase.enable_partition=true
#hbase.vertex_partitions=10
#hbase.edge_partitions=30
Initialize the database (required only on first startup)
cd hugegraph-${version}
bin/init-store.sh
Start the server
bin/start-hugegraph.sh
Starting HugeGraphServer...
@@ -154,11 +154,11 @@
Use curl to request the RESTful API
echo `curl -o /dev/null -s -w %{http_code} "http://localhost:8080/graphs/hugegraph/graph/vertices"`
A return value of 200 means the server started normally
6.2 Request the Server
The RESTful API of HugeGraphServer includes multiple types of resources, typically graph, schema, gremlin, traverser, and task:
- graph contains vertices and edges
- schema contains vertexlabels, propertykeys, edgelabels, and indexlabels
- gremlin contains various Gremlin statements, such as g.V(), which can be executed synchronously or asynchronously
- traverser contains various advanced queries, including shortest path, intersections, N-step reachable neighbors, etc.
- task contains querying and deleting asynchronous tasks
6.2.1 Get the vertices and their related properties in hugegraph
curl http://localhost:8080/graphs/hugegraph/graph/vertices
Explanation
Since a graph has many vertices and edges, for list-type requests such as getting all vertices or all edges, the Server compresses the data before returning it,
so using curl yields a pile of garbled characters; redirect the output to gunzip to decompress it. It is recommended to use the Chrome browser with the Restlet plugin to send HTTP requests for testing.
curl "http://localhost:8080/graphs/hugegraph/graph/vertices" | gunzip
The current default configuration only allows HugeGraphServer to be accessed locally; the configuration can be modified so that it is accessible from other machines.
vim conf/rest-server.properties

restserver.url=http://0.0.0.0:8080
The response body is as follows:
{
"vertices": [
{
"id": "2lop",
@@ -207,9 +207,9 @@
...
]
}
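The Java client provides the same listing without manual decompression; a sketch (listVertices follows hugegraph-client's GraphManager, but check your client version for the exact signature):
import java.util.List;
import org.apache.hugegraph.driver.HugeClient;
import org.apache.hugegraph.structure.graph.Vertex;

public class ListVerticesSketch {
    public static void main(String[] args) {
        HugeClient hugeClient = HugeClient.builder("http://localhost:8080", "hugegraph").build();
        // The client handles the gzip-compressed response transparently
        List<Vertex> vertices = hugeClient.graph().listVertices(3);
        vertices.forEach(System.out::println);
        hugeClient.close();
    }
}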
For detailed APIs, refer to the RESTful-API documentation
7 Stop the Server
$cd hugegraph-${version}
$bin/stop-hugegraph.sh
HugeGraph-Tools is HugeGraph's component for automated deployment, management, and backup/restore.
There are two ways to get HugeGraph-Tools (it is included in the Toolchain):
Download the latest HugeGraph-Toolchain package, then enter the tools subdirectory
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz
Before compiling from source, make sure the wget command is installed
Download the latest HugeGraph-Toolchain source package, then compile from the root directory, or compile the tool submodule alone:
# 1. get from github
git clone https://github.com/apache/hugegraph-toolchain.git
# 2. get from direct (e.g. here is 1.0.0, please choose the latest version)
-wget https://dist.apache.org/repos/dist/dev/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0-src.tar.gz
+wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0-src.tar.gz
编译生成 tar 包:
cd hugegraph-tools
mvn package -DskipTests
This generates the tar package hugegraph-tools-${version}.tar.gz
After unpacking it, enter the hugegraph-tools directory; you can use bin/hugegraph or bin/hugegraph help to view the usage information. It is mainly divided into:
Usage: hugegraph [options] [command] [command options]
options are the global variables of HugeGraph-Tools and can be configured in hugegraph-tools/bin/hugegraph, including:
The above global variables can also be set via environment variables. One way is to set temporary environment variables with export on the command line; they remain effective until that command-line session is closed
Global variable | Environment variable | Example |
---|---|---|
--url | HUGEGRAPH_URL | export HUGEGRAPH_URL=http://127.0.0.1:8080 |
--graph | HUGEGRAPH_GRAPH | export HUGEGRAPH_GRAPH=hugegraph |
--user | HUGEGRAPH_USERNAME | export HUGEGRAPH_USERNAME=admin |
--password | HUGEGRAPH_PASSWORD | export HUGEGRAPH_PASSWORD=test |
--timeout | HUGEGRAPH_TIMEOUT | export HUGEGRAPH_TIMEOUT=30 |
--trust-store-file | HUGEGRAPH_TRUST_STORE_FILE | export HUGEGRAPH_TRUST_STORE_FILE=/tmp/trust-store |
--trust-store-password | HUGEGRAPH_TRUST_STORE_PASSWORD | export HUGEGRAPH_TRUST_STORE_PASSWORD=xxxx |
Another way is to set the environment variables in the bin/hugegraph script:
#!/bin/bash
# Set environment here if needed
#export HUGEGRAPH_URL=
#export HUGEGRAPH_GRAPH=
#export HUGEGRAPH_USERNAME=
#export HUGEGRAPH_PASSWORD=
#export HUGEGRAPH_TIMEOUT=
#export HUGEGRAPH_TRUST_STORE_FILE=
#export HUGEGRAPH_TRUST_STORE_PASSWORD=
When you need to restore a backed-up graph into a new graph as-is, first set the graph mode to RESTORING; when you need to merge a backed-up graph into an existing graph, first set the graph mode to MERGING.
--file and --script are mutually exclusive; one of them must be set
--file and --script are mutually exclusive; one of them must be set
The restore command can be used only when backup was executed with --format json
vertex vertex-edge1 vertex-edge2...
stored in JSON format.
Users can also customize the storage format: just implement a class extending Formatter, such as CustomFormatter, in the hugegraph-tools/src/main/java/com/baidu/hugegraph/formatter directory, and specify that class as the formatter when using it, for example
bin/hugegraph dump -f CustomFormatter
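A self-contained sketch of the idea; the Formatter interface below is an illustrative stand-in, not the real one from hugegraph-tools, so check the source there for the actual method to override:
package com.baidu.hugegraph.formatter;

// Illustrative stand-in for hugegraph-tools' Formatter (not the real interface)
interface Formatter {
    String dump(Object vertexWithEdges);
}

public class CustomFormatter implements Formatter {
    @Override
    public String dump(Object vertexWithEdges) {
        // Serialize a vertex together with its edges in any format you like
        return "custom: " + vertexWithEdges;
    }
}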
The deploy command has an optional parameter -u; when it is provided, the specified download address is used instead of the default to download the tarball, and the address is written into the ~/hugegraph-download-url-prefix file. Afterwards, when no address is specified, the tarball is downloaded preferentially from the address in ~/hugegraph-download-url-prefix; if neither -u nor ~/hugegraph-download-url-prefix is present, the default download address is used
The specific parameters of each subcommand are as follows:
Usage: hugegraph [options] [command] [command options]
@@ -381,7 +381,7 @@
# restore the graph mode
./bin/hugegraph --url http://127.0.0.1:8080 --graph hugegraph graph-mode-set -m NONE
./bin/hugegraph --url http://127.0.0.1:8080 --graph hugegraph migrate --target-url http://127.0.0.1:8090 --target-graph hugegraph
Welcome to HugeGraph docs
HugeGraph is an easy-to-use, efficient, general-purpose open source graph database system (Graph Database, GitHub project address), which implements the Apache TinkerPop3 framework and is fully compatible with the Gremlin query language.
With its complete toolchain components, it helps users easily build applications and products based on graph databases. HugeGraph supports fast import of more than 10 billion vertices and edges, and provides millisecond-level relational query capability (OLTP). It supports large-scale distributed graph computing (OLAP).
Typical application scenarios of HugeGraph include deep relationship exploration, association analysis, path search, feature extraction, data clustering, community detection, and knowledge graphs, and it is applicable to business fields such as network security, telecom fraud, financial risk control, advertising recommendation, social networks, and intelligent robots.
Features
HugeGraph supports graph operations in online and offline environments, supports batch import of data, supports efficient complex relationship analysis, and can be seamlessly integrated with big data platforms.
HugeGraph supports multi-user parallel operations. Users can enter Gremlin query statements and get graph query results in time. They can also call the HugeGraph API in user programs for graph analysis or query.
This system has the following features:
- Ease of use: HugeGraph supports Gremlin graph query language and RESTful API, provides common interfaces for graph retrieval, and has peripheral tools with complete functions to easily implement various graph-based query and analysis operations.
- Efficiency: HugeGraph has been deeply optimized in graph storage and graph computing, and provides a variety of batch import tools, which can easily complete the rapid import of tens of billions of data, and achieve millisecond-level response for graph retrieval through optimized queries. Supports simultaneous online real-time operations of thousands of users.
- Universal: HugeGraph supports the Apache Gremlin standard graph query language and the Property Graph standard graph modeling method, and supports graph-based OLTP and OLAP schemes. Integrate Apache Hadoop and Apache Spark big data platforms.
- Scalable: supports distributed storage, multiple copies of data, and horizontal expansion, built-in multiple back-end storage engines, and can easily expand the back-end storage engine through plug-ins.
- Open: HugeGraph code is open source (Apache 2 License), customers can modify and customize independently, and selectively give back to the open-source community.
The functions of this system include but are not limited to:
- Supports batch import of data from multiple data sources (including local files, HDFS files, MySQL databases, and other data sources), and supports import of multiple file formats (including TXT, CSV, JSON, and other formats)
- Provides a visual operation interface for operating, analyzing, and displaying graphs, lowering the barrier to entry for users
- Optimized graph interface: shortest path (Shortest Path), K-step connected subgraph (K-neighbor), K-step to reach the adjacent point (K-out), personalized recommendation algorithm PersonalRank, etc.
- Implemented based on Apache TinkerPop3 framework, supports Gremlin graph query language
- Support attribute graph, attributes can be added to vertices and edges, and support rich attribute types
- Has independent schema metadata information, has powerful graph modeling capabilities, and facilitates third-party system integration
- Support multi-vertex ID strategy: support primary key ID, support automatic ID generation, support user-defined string ID, support user-defined digital ID
- The attributes of edges and vertices can be indexed to support precise query, range query, and full-text search
- The storage system adopts plug-in mode, supporting RocksDB, Cassandra, ScyllaDB, HBase, MySQL, PostgreSQL, Palo, and InMemory, etc.
- Integrate with big data systems such as Hadoop and Spark GraphX, and support Bulk Load operations
- Support high availability HA, multiple copies of data, backup recovery, monitoring, etc.
Modules
- HugeGraph-Server: HugeGraph-Server is the core part of the HugeGraph project, including submodules such as Core, Backend, and API;
- Core: Graph engine implementation, connecting the Backend module downward and supporting the API module upward;
- Backend: Realize the storage of graph data to the backend. The supported backends include: Memory, Cassandra, ScyllaDB, RocksDB, HBase, MySQL, and PostgreSQL. Users can choose one according to the actual situation;
- API: Built-in REST Server, provides RESTful API to users, and is fully compatible with Gremlin query.
- HugeGraph-Client: HugeGraph-Client provides a RESTful API client for connecting to HugeGraph-Server. Currently, only Java version is implemented. Users of other languages can implement it by themselves;
- HugeGraph-Loader: HugeGraph-Loader is a data import tool based on HugeGraph-Client, which converts ordinary text data into graph vertices and edges and inserts them into graph database;
- HugeGraph-Computer: HugeGraph-Computer is a distributed graph processing system for HugeGraph (OLAP). It is an implementation of Pregel. It runs on the Kubernetes framework;
- HugeGraph-Hubble: HugeGraph-Hubble is HugeGraph’s web visualization management platform, a one-stop visual analysis platform. The platform covers the whole process from data modeling, to rapid data import, to online and offline analysis of data, and unified management of graphs;
- HugeGraph-Tools: HugeGraph-Tools is HugeGraph’s deployment and management tools, including functions such as managing graphs, backup/restore, Gremlin execution, etc.
Contact Us
- GitHub Issues: Feedback on usage issues and functional requirements (priority)
- Feedback Email: hugegraph@googlegroups.com
- WeChat public account: HugeGraph
The latest HugeGraph: 1.0.0, released on 2023-02-22(how to build from source).
components | description | download |
---|---|---|
HugeGraph-Server | The main program of HugeGraph | 1.0.0(alternate) |
HugeGraph-Toolchain | A collection of tools for graph data import/export/backup, web visualization, etc. | 1.0.0(alternate) |
Version | Release Date | server | toolchain | computer | Release Notes |
---|---|---|---|---|---|
1.0.0 | 2023-02-22 | [Binary] [Sign] [SHA512] | [Binary] [Sign] [SHA512] | [Binary] [Sign] [SHA512] | Release-Notes |
Version | Release Date | server | toolchain | computer | common | Release Notes |
---|---|---|---|---|---|---|
1.0.0 | 2023-02-22 | [Source] [Sign] [SHA512] | [Source] [Sign] [SHA512] | [Source] [Sign] [SHA512] | [Source] [Sign] [SHA512] | Release-Notes |
Note: The latest graph analysis and display platform is Hubble, which supports server v0.10 +.
HugeGraph-Server is the core part of the HugeGraph Project, containing submodules such as Core, Backend, and API.
The Core module is an implementation of the TinkerPop interface; the Backend module is used to save the graph data to the data store, and the currently supported backends include Memory, Cassandra, ScyllaDB, and RocksDB; the API module provides the HTTP Server, which converts the Client's HTTP requests into calls to the Core module.
There will be two spellings, HugeGraph-Server and HugeGraphServer, in the document, and other modules are similar. There is no big difference in the meaning of the two spellings, which can be distinguished as follows: HugeGraph-Server represents the code of server-related components, while HugeGraphServer represents the service process.
Consider using Java 11 to run HugeGraph-Server (also compatible with Java 8 for now), and configure it by yourself.
Be sure to execute the java -version command to check the JDK version before reading further
If you are using the RocksDB backend, be sure to execute the gcc --version command to check its version; this is not required if you are using other backends.
There are three ways to deploy HugeGraph-Server components:
HugeGraph-Tools provides a command-line tool for one-click deployment; users can use this tool to quickly download, decompress, configure, and start HugeGraphServer and HugeGraph-Hubble with one click.
Of course, you should download the tarball of HugeGraph-Toolchain first.
# download toolchain binary package, it includes loader + tool + hubble
# please check the latest version (e.g. here is 1.0.0)
-wget https://dist.apache.org/repos/dist/dev/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
+wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
tar zxf *hugegraph-*.tar.gz
# enter the tool's package
cd *hugegraph*/*tool*
Note: ${version} is the version number; for the latest version refer to the Download page, or click the links there to download directly
The general entry script for HugeGraph-Tools is bin/hugegraph; users can use the help command to view its usage. Only the commands for one-click deployment are introduced here.
bin/hugegraph deploy -v {hugegraph-version} -p {install-path} [-u {download-path-prefix}]
{hugegraph-version} indicates the version of HugeGraphServer and HugeGraphStudio to be deployed; users can view the conf/version-mapping.yaml file for version information. {install-path} specifies the installation directory of HugeGraphServer and HugeGraphStudio. {download-path-prefix} is optional and specifies the download address of the HugeGraphServer and HugeGraphStudio tarballs; the default download URL is used if it is not provided. For example, to start HugeGraph-Server and HugeGraphStudio version 0.6, write the above command as bin/hugegraph deploy -v 0.6 -p services.
You can download the binary tarball from the download page of the ASF site like this:
# use the latest version, here is 1.0.0 for example
-wget https://dist.apache.org/repos/dist/dev/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz
+wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz
# (Optional) verify the integrity with SHA512 (recommended)
shasum -a 512 apache-hugegraph-incubating-1.0.0.tar.gz
-curl https://dist.apache.org/repos/dist/dev/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz.sha512
+curl https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz.sha512
Please ensure that the wget command is installed before compiling the source code
We can get the HugeGraph source code in two ways (the same applies to the other HugeGraph repos/modules):
# Way 1. download release package from the ASF site
-wget https://dist.apache.org/repos/dist/dev/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-src-1.0.0.tar.gz
+wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-src-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz
# (Optional) verify the integrity with SHA512 (recommended)
shasum -a 512 apache-hugegraph-incubating-src-1.0.0.tar.gz
-curl https://dist.apache.org/repos/dist/dev/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0-src.tar.gz.sha512
+curl https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0-src.tar.gz.sha512
# Way2 : clone the latest code by git way (e.g GitHub)
git clone https://github.com/apache/hugegraph.git
@@ -50,37 +50,37 @@
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
......
After successful execution, *hugegraph-*.tar.gz
files will be generated in the hugegraph directory, which is the tarball generated by compilation.
You can refer to Docker deployment guide.
If you need to quickly start HugeGraph just for testing, you only need to modify a few configuration items (see the next section). For a detailed configuration introduction, please refer to the configuration document and the introduction to configuration items
The startup is divided into "first startup" and "non-first startup". This distinction exists because the back-end database needs to be initialized before the first startup, after which the service is started. After the service has been stopped manually, or when it needs to be started again for other reasons, the backend database is persistent, so you can start the service directly.
When HugeGraphServer starts, it will connect to the backend storage and try to check the version number of the backend storage. If the backend is not initialized or the backend has been initialized but the version does not match (old version data), HugeGraphServer will fail to start and give an error message.
If you need to access HugeGraphServer externally, please modify the restserver.url
configuration item of rest-server.properties
(default is http://127.0.0.1:8080) and change it to a machine name or IP address.
Since the configuration (hugegraph.properties) and startup steps required by various backends are slightly different, the following introduces the configuration and startup of each backend one by one.
5.1 Memory
Update hugegraph.properties
backend=memory
serializer=text
The data of the Memory backend is stored in memory and cannot be persisted. It does not need to initialize the backend. This is the only backend that does not require initialization.
Start server
bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
The prompted url is the same as the restserver.url configured in rest-server.properties
5.2 RocksDB
RocksDB is an embedded database that does not require manual installation and deployment. GCC version >= 4.3.0 (GLIBCXX_3.4.10) is required. If not, GCC needs to be upgraded in advance
Update hugegraph.properties
backend=rocksdb
serializer=binary
rocksdb.data_path=.
rocksdb.wal_path=.
Initialize the database (required only on first startup)
cd hugegraph-${version}
bin/init-store.sh
Start server
bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
5.3 Cassandra
Users need to install Cassandra by themselves, requiring version 3.0 or above, download link
Update hugegraph.properties
backend=cassandra
serializer=cassandra

# cassandra backend config
cassandra.host=localhost
cassandra.port=9042
cassandra.username=
cassandra.password=
#cassandra.connect_timeout=5
#cassandra.read_timeout=20

#cassandra.keyspace.strategy=SimpleStrategy
#cassandra.keyspace.replication=3
Initialize the database (required only on first startup)
cd hugegraph-${version}
bin/init-store.sh
Initing HugeGraph Store...
2017-12-01 11:26:51 1424 [main] [INFO ] com.baidu.hugegraph.HugeGraph [] - Opening backend store: 'cassandra'
@@ -102,36 +102,36 @@
Start server
bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
5.4 ScyllaDB
Users need to install ScyllaDB themselves; version 2.1 or above is recommended (download link).
Update hugegraph.properties
backend=scylladb
serializer=scylladb

# cassandra backend config
cassandra.host=localhost
cassandra.port=9042
cassandra.username=
cassandra.password=
#cassandra.connect_timeout=5
#cassandra.read_timeout=20

#cassandra.keyspace.strategy=SimpleStrategy
#cassandra.keyspace.replication=3

Since ScyllaDB itself is an "optimized version" of Cassandra, users who do not have ScyllaDB installed can also use Cassandra directly as the backend storage: just keep backend and serializer set to scylladb, and point the host and port to the seeds and port of the Cassandra cluster. However, this is not recommended, because it does not take advantage of ScyllaDB itself.
Initialize the database (required only on first startup)
cd hugegraph-${version}
bin/init-store.sh
Start server
bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
5.5 HBase
Users need to install HBase themselves; version 2.0 or above is required (download link).
Update hugegraph.properties
backend=hbase
serializer=hbase

# hbase backend config
hbase.hosts=localhost
hbase.port=2181
# Note: it is recommended to adjust the HBase partition number according to the actual data volume and RegionServer count before init-store,
# as it may greatly influence the loading speed
#hbase.enable_partition=true
#hbase.vertex_partitions=10
#hbase.edge_partitions=30
Initialize the database (required only on first startup)
cd hugegraph-${version}
bin/init-store.sh
Start server
bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
Check whether the service started successfully with jps:
jps
6475 HugeGraphServer
Or use curl to request the RESTful API:
echo `curl -o /dev/null -s -w %{http_code} "http://localhost:8080/graphs/hugegraph/graph/vertices"`
A return value of 200 means the server started normally.
6.2 Request Server
The RESTful API of HugeGraphServer includes various types of resources, typically including graph, schema, gremlin, traverser and task.
graph
contains vertices
、edges
schema
contains vertexlabels
、 propertykeys
、 edgelabels
、indexlabels
gremlin
contains various Gremlin
statements, such as g.v()
, which can be executed synchronously or asynchronouslytraverser
contains various advanced queries including shortest paths, intersections, N-step reachable neighbors, etc.task
contains query and delete with asynchronous tasks
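For example, a Gremlin statement can be submitted to the gremlin resource over HTTP (a sketch; the request shape follows the Gremlin-Server HTTP protocol exposed by HugeGraphServer, so consult the RESTful-API reference for the exact fields):
curl -X POST -H "Content-Type: application/json" \
     -d '{"gremlin": "g.V().limit(2)"}' \
     "http://localhost:8080/gremlin"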
6.2.1 Get vertices and its related properties in hugegraph
curl http://localhost:8080/graphs/hugegraph/graph/vertices
explanation
Since there are many vertices and edges in the graph, for list-type requests such as getting all vertices or all edges, the server compresses the data before returning it. So when using curl you get a bunch of garbled characters; you can pipe the output to gunzip for decompression. It is recommended to use the Chrome browser with the Restlet plugin to send HTTP requests for testing.
curl "http://localhost:8080/graphs/hugegraph/graph/vertices" | gunzip
The current default configuration of HugeGraphServer only allows local access; the configuration can be modified so that it can be accessed from other machines.
vim conf/rest-server.properties

restserver.url=http://0.0.0.0:8080
response body:
{
    "vertices": [
        {
            "id": "2lop",
            "label": "software",
            "type": "vertex",
            "properties": {
                "price": [
                    {
                        "id": "price",
                        "value": 328
                    }
                ],
                "name": [
                    {
                        "id": "name",
                        "value": "lop"
                    }
                ],
                "lang": [
                    {
                        "id": "lang",
                        "value": "java"
                    }
                ]
            }
        },
        {
            "id": "1josh",
            "label": "person",
            "type": "vertex",
            "properties": {
                "name": [
                    {
                        "id": "name",
                        "value": "josh"
                    }
                ],
                "age": [
                    {
                        "id": "age",
                        "value": 32
                    }
                ]
            }
        },
        ...
    ]
}
For detailed API, please refer to RESTful-API
7 Stop Server
$cd hugegraph-${version}
$bin/stop-hugegraph.sh
HugeGraph-Loader is the data import component of HugeGraph, which can convert data from various data sources into graph vertices and edges and import them into the graph database in batches.
Currently supported data sources include local disk files or directories, HDFS files or directories, and some relational databases.
Local disk files and HDFS files support resumable loading.
These are explained in detail below.
Note: HugeGraph-Loader requires the HugeGraph Server service; please refer to HugeGraph-Server Quick Start to download and start the Server.
There are two ways to get HugeGraph-Loader:
Download the latest version of the HugeGraph-Toolchain release package:
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz
Clone the latest version of HugeGraph-Loader source package:
# 1. get from github
git clone https://github.com/apache/hugegraph-toolchain.git
# 2. get from direct (e.g. here is 1.0.0, please choose the latest version)
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0-src.tar.gz
Due to the license limitation of the Oracle OJDBC, you need to manually install ojdbc into the local Maven repository.
Visit the Oracle JDBC downloads page and select Oracle Database 12c Release 2 (12.2.0.1) drivers, as shown in the following figure. After opening the link, select "ojdbc8.jar" as shown below.
Install ojdbc8 into the local Maven repository: enter the directory where ojdbc8.jar is located and execute the following command.
mvn install:install-file -Dfile=./ojdbc8.jar -DgroupId=com.oracle -DartifactId=ojdbc8 -Dversion=12.2.0.1 -Dpackaging=jar
Compile and generate tar package:
cd hugegraph-loader
mvn clean package -DskipTests
3 How to use
The basic process of using HugeGraph-Loader is divided into the following steps:
- Write graph schema
- Prepare data files
- Write input source map files
- Execute command import
3.1 Construct graph schema
This step is the modeling process. Users need to have a clear idea of their existing data and the graph model they want to create, and then write the schema to build the graph model.
For example, suppose you want to create a graph with two types of vertices and two types of edges: the vertices are "person" and "software", and the edges are "person knows person" and "person created software". These vertices and edges have some properties, for example the vertex "person" has properties such as "name" and "age", the vertex "software" has properties such as "name" and "price", and the edge "knows" has a "date" property, and so on.
graph model example
After designing the graph model, we can use groovy to write the definition of the schema and save it to a file, here named schema.groovy.
// Create some properties
...
schema.edgeLabel("knows").sourceLabel("person").targetLabel("person").ifNotExist().create();
// Create the created edge type, which points from person to software
schema.edgeLabel("created").sourceLabel("person").targetLabel("software").ifNotExist().create();
Please refer to the corresponding section in hugegraph-client for the detailed description of the schema.
3.2 Prepare data
The data sources currently supported by HugeGraph-Loader include:
- local disk file or directory
- HDFS file or directory
- some relational databases
3.2.1 Data source structure
3.2.1.1 Local disk file or directory
The user can specify a local disk file as the data source. If the data is scattered across multiple files, a directory is also supported as the data source, but multiple directories are not supported as the data source for the time being.
For example: my data is scattered in multiple files, part-0, part-1 … part-n. To perform the import, it must be ensured that they are placed in one directory. Then, in the loader's mapping file, specify path as that directory.
Supported file formats include:
- TEXT
- CSV
- JSON
TEXT is a text file with custom delimiters. The first line is usually the header, recording the name of each column (a file without a header line is also allowed, as long as this is specified in the mapping file). Each remaining row represents a record, which will be converted into a vertex/edge; each column of the row corresponds to a field, which will be converted into the id, label or a property of the vertex/edge.
An example is as follows:
id|name|lang|price|ISBN
1|lop|java|328|ISBN978-7-107-18618-5
2|ripple|java|199|ISBN978-7-100-13678-5

CSV is a TEXT file that uses commas (,) as delimiters. When a column value itself contains a comma, the column value needs to be enclosed in double quotes, for example:
marko,29,Beijing
"li,nary",26,"Wu,han"

The JSON file requires that each line is a JSON string, and the format of each line needs to be consistent.
{"source_name": "marko", "target_name": "vadas", "date": "20160110", "weight": 0.5}
{"source_name": "marko", "target_name": "josh", "date": "20130220", "weight": 1.0}
3.2.1.2 HDFS file or directory
Users can also specify HDFS files or directories as data sources; all of the above requirements for local disk files or directories apply here. In addition, since HDFS usually stores compressed files, the loader also provides support for compressed files, and local disk files or directories also support compressed files.
Currently supported compressed file types include: GZIP, BZ2, XZ, LZMA, SNAPPY_RAW, SNAPPY_FRAMED, Z, DEFLATE, LZ4_BLOCK, LZ4_FRAMED, ORC, and PARQUET.
3.2.1.3 Mainstream relational database
The loader also supports some relational databases as data sources, currently MySQL, PostgreSQL, Oracle and SQL Server.
However, the requirements for the table structure are relatively strict at present. If an association query needs to be done during the import process, such a table structure is not allowed. An association query means: after reading a row of the table, it is found that the value of a certain column cannot be used directly (such as a foreign key), and another query is needed to determine the true value of that column.
For example, suppose there are three tables: person, software and created.
// person schema
id | name | age | city

// software schema
id | name | lang | price

// created schema
id | p_id | s_id | date

If the id strategy of person or software is specified as PRIMARY_KEY when modeling the schema, with name chosen as the primary key (note: this is the vertex-label concept in hugegraph), then when importing edge data the ids of the source and target vertices need to be spliced together, and you would have to look up the corresponding name in the person/software table using p_id/s_id. The loader does not currently support schemas that require such an additional query. In this case, the following two methods can be used instead:
- Keep the id strategy of person and software as PRIMARY_KEY, but use the id column of the person and software tables as the primary-key property of the vertex, so that when importing an edge the vertex id can be generated by directly splicing p_id or s_id with the vertex label;
- Specify the id strategy of person and software as CUSTOMIZE, and then directly use the id column of the person and software tables as the vertex id, so that p_id and s_id can be used directly when importing edges.
The key point is to let the edge use p_id and s_id directly, without looking them up again.
3.2.2 Prepare vertex and edge data
3.2.2.1 Vertex Data
The vertex data file consists of data line by line. Generally, each line is used as a vertex, and each column is used as a vertex property. The following description uses CSV format as an example.
- person vertex data (the data itself does not contain a header)
Tom,48,Beijing
Jerry,36,Shanghai

- software vertex data (the data itself contains the header)
name,price
Photoshop,999
Office,388

3.2.2.2 Edge data
The edge data file consists of data line by line. Generally, each line is used as an edge. Some columns are used as the IDs of the source and target vertices, and other columns are used as edge properties. The following uses JSON format as an example.
- knows edge data
{"source_name": "Tom", "target_name": "Jerry", "date": "2008-12-12"}
- created edge data
{"source_name": "Tom", "target_name": "Photoshop"}
{"source_name": "Tom", "target_name": "Office"}
{"source_name": "Jerry", "target_name": "Office"}
After the user fixes the data lines in a failure file, set --reload-failure to true to import these "failure files" as input sources (this does not affect the import of normal files). Of course, if there is still a problem with a modified data line, it will be logged to the failure file again (don't worry about duplicate lines).
Each vertex mapping or edge mapping generates its own failure files when data insertion fails. A failure file is divided into a parsing failure file (suffix .parse-error) and an insertion failure file (suffix .insert-error), and they are stored in the ${struct}/current directory. For example, if there is a vertex mapping person and an edge mapping knows in the mapping file, each of which has some error lines, when the Loader exits you will see the following files in the ${struct}/current directory:
- person-b4cd32ab.parse-error: data lines of vertex mapping person that failed to parse
- person-b4cd32ab.insert-error: data lines of vertex mapping person that failed to insert
- knows-eb6b2bac.parse-error: data lines of edge mapping knows that failed to parse
- knows-eb6b2bac.insert-error: data lines of edge mapping knows that failed to insert
.parse-error and .insert-error do not always exist together. Only lines with parsing errors produce .parse-error files, and only lines with insertion errors produce .insert-error files.
3.4.3 logs directory file description
The logs and error data generated during program execution will be written into the hugegraph-loader.log file.
3.4.4 Execute command
Run bin/hugegraph-loader and pass in parameters
bin/hugegraph-loader -g {GRAPH_NAME} -f ${INPUT_DESC_FILE} -s ${SCHEMA_FILE} -h {HOST} -p {PORT}
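For instance, re-importing previously fixed failure lines could combine the basic command with the --reload-failure switch described above (a sketch; consult the loader's parameter description for the exact flag syntax):
bin/hugegraph-loader -g hugegraph -f example/file/struct.json -s example/file/schema.groovy --reload-failure true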
4 Complete example
Given below is an example in the example directory of the hugegraph-loader package (GitHub address).
4.1 Prepare data
Vertex file: example/file/vertex_person.csv
marko,29,Beijing
vadas,27,Hongkong
josh,32,Beijing
peter,35,Shanghai
"li,nary",26,"Wu,han"
tom,null,NULL

Vertex file: example/file/vertex_software.txt
id|name|lang|price|ISBN
1|lop|java|328|ISBN978-7-107-18618-5
2|ripple|java|199|ISBN978-7-100-13678-5

Edge file: example/file/edge_knows.json
{"source_name": "marko", "target_name": "vadas", "date": "20160110", "weight": 0.5}
{"source_name": "marko", "target_name": "josh", "date": "20130220", "weight": 1.0}

Edge file: example/file/edge_created.json
{"aname": "marko", "bname": "lop", "date": "20171210", "weight": 0.4}
{"aname": "josh", "bname": "lop", "date": "20091111", "weight": 0.4}
{"aname": "josh", "bname": "ripple", "date": "20171210", "weight": 1.0}
{"aname": "peter", "bname": "lop", "date": "20170324", "weight": 0.2}

4.2 Write schema
schema file: example/file/schema.groovy
schema.propertyKey("name").asText().ifNotExist().create();
schema.propertyKey("age").asInt().ifNotExist().create();
schema.propertyKey("city").asText().ifNotExist().create();
schema.propertyKey("weight").asDouble().ifNotExist().create();
...
schema.vertexLabel("person").properties("name", "age", "city").primaryKeys("name").ifNotExist().create();
schema.vertexLabel("software").properties("name", "lang", "price").primaryKeys("name").ifNotExist().create();
schema.indexLabel("personByAge").onV("person").by("age").range().ifNotExist().create();
schema.indexLabel("personByCity").onV("person").by("city").secondary().ifNotExist().create();
schema.indexLabel("personByAgeAndCity").onV("person").by("age", "city").secondary().ifNotExist().create();
4.3 Write the input source mapping file example/file/struct.json (fragment):
"label": "person",
"input": {
"type": "file",
- "path": "example/vertex_person.csv",
+ "path": "example/file/vertex_person.csv",
"format": "CSV",
"header": ["name", "age", "city"],
- "charset": "UTF-8"
+ "charset": "UTF-8",
+ "skipped_line": {
+ "regex": "(^#|^//).*"
+ }
},
- "mapping": {
- "name": "name",
- "age": "age",
- "city": "city"
- }
+ "null_values": ["NULL", "null", ""]
},
{
"label": "software",
"input": {
"type": "file",
- "path": "example/vertex_software.text",
+ "path": "example/file/vertex_software.txt",
"format": "TEXT",
"delimiter": "|",
"charset": "GBK"
- }
+ },
+ "id": "id",
+ "ignored": ["ISBN"]
}
],
"edges": [
@@ -625,71 +626,72 @@
"target": ["target_name"],
"input": {
"type": "file",
- "path": "example/edge_knows.json",
- "format": "JSON"
+ "path": "example/file/edge_knows.json",
+ "format": "JSON",
+ "date_format": "yyyyMMdd"
},
- "mapping": {
+ "field_mapping": {
"source_name": "name",
"target_name": "name"
}
},
{
"label": "created",
- "source": ["aname"],
- "target": ["bname"],
+ "source": ["source_name"],
+ "target": ["target_id"],
"input": {
"type": "file",
- "path": "example/edge_created.json",
- "format": "JSON"
+ "path": "example/file/edge_created.json",
+ "format": "JSON",
+ "date_format": "yyyy-MM-dd"
},
- "mapping": {
- "aname": "name",
- "bname": "name"
+ "field_mapping": {
+ "source_name": "name"
}
}
]
}
4.4 Command to import
sh bin/hugegraph-loader.sh -g hugegraph -f example/file/struct.json -s example/file/schema.groovy
After the import is complete, statistics similar to the following will appear:
vertices/edges has been loaded this time : 8/6
--------------------------------------------------
count metrics
    input read success            : 14
    input read failure            : 0
    vertex parse success          : 8
    vertex parse failure          : 0
    vertex insert success         : 8
    vertex insert failure         : 0
    edge parse success            : 6
    edge parse failure            : 0
    edge insert success           : 6
    edge insert failure           : 0

4.5 Import data by spark-loader
Spark version: Spark 3+; other versions have not been tested.
HugeGraph Toolchain version: toolchain-1.0.0
The parameters of spark-loader are divided into two parts. Note: because the abbreviations of these two kinds of parameter names overlap, please use the full parameter names. There is no need to guarantee the order between the two kinds of parameters.
- hugegraph parameters (reference: hugegraph-loader parameter description)
- Spark task submission parameters (reference: Submitting Applications)
Example:
sh bin/hugegraph-spark-loader.sh --master yarn \
--deploy-mode cluster --name spark-hugegraph-loader --file ./hugegraph.json \
--username admin --token admin --host xx.xx.xx.xx --port 8093 \
--graph graph-test --num-executors 6 --executor-cores 16 --executor-memory 15g
HugeGraph-Tools is an automated deployment, management and backup/restore component of HugeGraph.
There are two ways to get HugeGraph-Tools:
Download the latest version of the HugeGraph-Toolchain package:
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz
Please ensure that the wget command is installed before compiling the source code
Download the latest version of the HugeGraph-Tools source package:
# 1. get from github
git clone https://github.com/apache/hugegraph-toolchain.git
# 2. get from direct (e.g. here is 1.0.0, please choose the latest version)
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0-src.tar.gz
Compile and generate tar package:
cd hugegraph-tools
mvn package -DskipTests
Generate tar package hugegraph-tools-${version}.tar.gz
After decompression, enter the hugegraph-tools directory. You can use bin/hugegraph or bin/hugegraph help to view the usage information, which is mainly divided as follows:
Usage: hugegraph [options] [command] [command options]
options is a set of global variables of HugeGraph-Tools, which can be configured in hugegraph-tools/bin/hugegraph, including:
The above global variables can also be set through environment variables. One way is to use export on the command line to set temporary environment variables, which remain valid until the command-line session is closed:

| Global Variable | Environment Variable | Example |
|---|---|---|
| --url | HUGEGRAPH_URL | export HUGEGRAPH_URL=http://127.0.0.1:8080 |
| --graph | HUGEGRAPH_GRAPH | export HUGEGRAPH_GRAPH=hugegraph |
| --user | HUGEGRAPH_USERNAME | export HUGEGRAPH_USERNAME=admin |
| --password | HUGEGRAPH_PASSWORD | export HUGEGRAPH_PASSWORD=test |
| --timeout | HUGEGRAPH_TIMEOUT | export HUGEGRAPH_TIMEOUT=30 |
| --trust-store-file | HUGEGRAPH_TRUST_STORE_FILE | export HUGEGRAPH_TRUST_STORE_FILE=/tmp/trust-store |
| --trust-store-password | HUGEGRAPH_TRUST_STORE_PASSWORD | export HUGEGRAPH_TRUST_STORE_PASSWORD=xxxx |

Another way is to set the environment variables in the bin/hugegraph script:
#!/bin/bash

# Set environment here if needed
#export HUGEGRAPH_URL=
#export HUGEGRAPH_GRAPH=
#export HUGEGRAPH_USERNAME=
#export HUGEGRAPH_PASSWORD=
#export HUGEGRAPH_TIMEOUT=
#export HUGEGRAPH_TRUST_STORE_FILE=
#export HUGEGRAPH_TRUST_STORE_PASSWORD=
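As a usage sketch, the environment-variable form lets you export the connection once and then run subcommands without repeating the global options (graph-mode-set is referenced here based on the RESTORING/MERGING note below; verify the exact subcommand name with bin/hugegraph help):
export HUGEGRAPH_URL=http://127.0.0.1:8080
export HUGEGRAPH_GRAPH=hugegraph
bin/hugegraph graph-mode-set -m RESTORING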
When you need to restore a backup graph to a new graph, you need to set the graph mode to RESTORING; when you need to merge a backup graph into an existing graph, you need to first set the graph mode to MERGING.
--file and --script are mutually exclusive; one of them must be set.
The restore command can be used only if the backup was executed with --format json.
Dumped data is organized line by line as a vertex together with its adjacent edges (vertex vertex-edge1 vertex-edge2...), in JSON format by default.
Users can also customize the storage format: simply implement a class inheriting from Formatter in the hugegraph-tools/src/main/java/com/baidu/hugegraph/formatter directory, such as CustomFormatter, and specify this class as the formatter when using it, for example:
bin/hugegraph dump -f CustomFormatter
The deploy command has an optional parameter -u. When provided, the specified download address is used instead of the default address to download the tar package, and the address is written into the ~/hugegraph-download-url-prefix file. Later, when -u is not specified but ~/hugegraph-download-url-prefix exists, the tar package is downloaded from the address recorded in ~/hugegraph-download-url-prefix; if neither -u nor ~/hugegraph-download-url-prefix is present, it is downloaded from the default download address.
The specific parameters of each subcommand are as follows:
Usage: hugegraph [options] [command] [command options]
./bin/hugegraph --url http://127.0.0.1:8080 --graph hugegraph migrate --target-url http://127.0.0.1:8090 --target-graph hugegraph
HugeGraph is an analysis-oriented graph database system that supports batch operations, fully supports the Apache TinkerPop3 framework and the Gremlin graph query language, and provides a complete tool-chain ecosystem for export, backup and recovery, effectively meeting the storage, query and correlation-analysis needs of massive graph data. HugeGraph is widely used in fields such as risk control, insurance claims, recommendation and search, crime crackdown in public security, knowledge graphs, network security, and IT operation and maintenance of banks and securities companies, and is committed to allowing more industries, organizations and users to enjoy the comprehensive value of a wider range of data.
HugeGraph-Hubble is HugeGraph's one-stop visual analysis platform. The platform covers the whole process from data modeling, to efficient data import, to real-time and offline analysis of data, and unified management of graphs, realizing a wizard-style workflow for graph applications. It is designed to improve usage fluency, lower the barrier to entry, and provide a more efficient and easy-to-use experience.
The platform mainly includes the following modules:
The graph management module realizes unified management of multiple graphs, as well as graph access, editing, deletion and query, by creating graphs and connecting the platform to the graph data.
The metadata modeling module realizes the construction and management of graph models by creating property keys, vertex types, edge types and index types. The platform provides two modes, list mode and graph mode, which display the metadata model in real time and more intuitively. It also provides a cross-graph metadata reuse function, which saves the tedious and repetitive creation of identical metadata, greatly improves modeling efficiency and enhances ease of use.
Data import converts the user's business data into the vertices and edges of a graph and inserts them into the graph database. The platform provides a wizard-style visual import module: by creating import tasks, it realizes the management of import tasks and the parallel execution of multiple import tasks, improving import performance. After starting an import task, you only need to follow the platform's step-by-step prompts, uploading files and filling in the content as needed, to easily complete the import of graph data. It also supports resuming from breakpoints, an error-retry mechanism, etc., which reduces import costs and improves efficiency.
By inputting the graph traversal language Gremlin, high-performance general analysis of graph data can be performed, with functions such as customized multidimensional path queries for vertices. Three display forms for graph results are provided (graph form, table form and JSON form), and this multidimensional display of data meets the needs of the various scenarios users face. It provides functions such as run history and a collection of common statements, making graph operations traceable and query inputs reusable and shareable, which is fast and efficient. It supports exporting graph data, in JSON format.
For Gremlin tasks that need to traverse the whole graph, as well as time-consuming asynchronous tasks such as index creation and rebuilding, the platform provides corresponding task management functions to achieve unified management and result viewing of asynchronous tasks.
The module usage process of the platform is as follows:
Under the graph management module, click [Create graph], and realize the connection of multiple graphs by filling in the graph ID, graph name, host name, port number, username, and password information.
Create a graph by filling in the content as follows:
Realize access to the graph space. After entering, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis of the graph.
Left navigation:
List mode:
Graph mode:
Select reuse items:
Check reuse items:
List mode:
Graph mode:
Editing operations are available: the vertex style, association type, vertex display content, and property index can be edited; the rest cannot be edited.
You can delete a single item or delete it in batches.
List mode:
Graph mode:
Displays vertex and edge indices for vertex types and edge types.
The usage process of data import is as follows:
Left navigation:
Set up data mapping for uploaded files, including file settings and type settings
File settings: check or fill in whether the file contains a header, the delimiter, the encoding format and other settings of the file itself; defaults are provided for all of them, so manual filling is usually unnecessary.
Type setting:
Vertex map and edge map:
【Vertex Type】: Select the vertex type, and upload the column data in the file for its ID mapping;
【Edge Type】: Select the edge type and map the column data of the uploaded file to the ID column of its start point type and end point type;
Mapping settings: map the column data in the uploaded file to the properties of the selected vertex type. If a property name is identical to the header name in the file, the mapping can be matched automatically, without manual selection.
After completing the setting, the setting list will be displayed before proceeding to the next step. It supports the operations of adding, editing and deleting mappings.
Fill in the settings map:
Mapping list:
Before importing, you need to fill in the import-setting parameters. After filling them in, you can start importing data into the graph.
Left navigation:
By switching the entrance on the left, you can flexibly switch between the operation spaces of multiple graphs.
HugeGraph supports Gremlin, the graph traversal query language of Apache TinkerPop3. Gremlin is a general graph database query language: by entering Gremlin statements and clicking execute, you can query and analyze graph data and perform operations such as creating and deleting vertices/edges and modifying vertex/edge properties.
After a Gremlin query, the area below is the graph result display area, which provides three display modes: [Graph Mode], [Table Mode], [Json Mode].
Zoom, center, full screen, export and other operations are supported.
[Graph Mode]
[Table Mode]
[Json Mode]
Click a vertex/edge entity to view the data details of the vertex/edge, including the vertex/edge type, the vertex ID, and the properties with their values, expanding the information display dimension of the graph and improving usability.
In addition to the global query, in-depth customized query and hidden operations can be performed for the vertices in the query result to realize customized mining of graph results.
Right-click a vertex, and the menu entry of the vertex appears, which can be displayed, inquired, hidden, etc.
Double-clicking a vertex also displays all vertices associated with the selected one.
In the graph area, two entries can be used to dynamically add vertices, as follows:
Complete the addition of vertices by selecting or filling in the vertex type, ID value, and attribute information.
The entry is as follows:
Add the vertex content as follows:
Right-click a vertex in the graph result to add the outgoing or incoming edge of that point.
Left navigation:
Click to view the entry to jump to the task management list, as follows:
There is no visual OLAP algorithm execution on Hubble. You can call the RESTful API to execute OLAP algorithm tasks, find the corresponding task by ID in task management, and view its progress and results.
HugeGraph-Client sends HTTP requests to HugeGraph-Server to obtain and parse the Server's execution results. Currently only a Java version of HugeGraph-Client is provided. You can use HugeGraph-Client to write Java code to operate HugeGraph, for example to add, delete, modify and query schema and graph data, or to execute Gremlin statements.
The basic steps to use HugeGraph-Client are as follows:
See the complete example in the following section for details.
Using IDEA or Eclipse to create the project:
<dependencies>
    <dependency>
        <groupId>org.apache.hugegraph</groupId>
        <artifactId>hugegraph-client</artifactId>
        <version>${version}</version>
    </dependency>
</dependencies>
import java.util.Iterator;
import java.util.List;
import org.apache.hugegraph.driver.GraphManager;
import org.apache.hugegraph.driver.GremlinManager;
import org.apache.hugegraph.driver.HugeClient;
import org.apache.hugegraph.driver.SchemaManager;
import org.apache.hugegraph.structure.constant.T;
import org.apache.hugegraph.structure.graph.Edge;
import org.apache.hugegraph.structure.graph.Path;
import org.apache.hugegraph.structure.graph.Vertex;
import org.apache.hugegraph.structure.gremlin.Result;
import org.apache.hugegraph.structure.gremlin.ResultSet;
public class SingleExample {
...
.create();
GraphManager graph = hugeClient.graph();
Vertex marko = graph.addVertex(T.LABEL, "person", "name", "marko",
                               "age", 29, "city", "Beijing");
Vertex vadas = graph.addVertex(T.LABEL, "person", "name", "vadas",
                               "age", 27, "city", "Hongkong");
Vertex lop = graph.addVertex(T.LABEL, "software", "name", "lop",
                             "lang", "java", "price", 328);
Vertex josh = graph.addVertex(T.LABEL, "person", "name", "josh",
                              "age", 32, "city", "Beijing");
Vertex ripple = graph.addVertex(T.LABEL, "software", "name", "ripple",
                                "lang", "java", "price", 199);
Vertex peter = graph.addVertex(T.LABEL, "person", "name", "peter",
                               "age", 35, "city", "Shanghai");
marko.addEdge("knows", vadas, "date", "2016-01-10", "weight", 0.5);
...
import java.util.ArrayList;
import java.util.List;
import org.apache.hugegraph.driver.GraphManager;
import org.apache.hugegraph.driver.HugeClient;
import org.apache.hugegraph.driver.SchemaManager;
import org.apache.hugegraph.structure.graph.Edge;
import org.apache.hugegraph.structure.graph.Vertex;
public class BatchExample {
...
mvn clean package -DskipTests
You can use the -c parameter to specify the configuration file; for more computer config options, please see: Computer Config Options
cd hugegraph-computer-${version}
bin/start-computer.sh -d local -r master
bin/start-computer.sh -d local -r worker
2.5.1 Enable OLAP index query for server
If the OLAP index is not enabled, it needs to be enabled; for more, see: modify-graphs-read-mode
PUT http://localhost:8080/graphs/hugegraph/graph_read_mode
"ALL"
2.5.2 Query the page_rank property value:
curl "http://localhost:8080/graphs/hugegraph/graph/vertices?page&limit=3" | gunzip
# NOTE: diagnostic log exist only when the job fails, and it will only be saved for one hour.
kubectl get event --field-selector reason=ComputerJobFailed --field-selector involvedObject.name=pagerank-sample -n hugegraph-computer-system
NOTE: it will only be saved for one hour
kubectl get event --field-selector reason=ComputerJobSucceed --field-selector involvedObject.name=pagerank-sample -n hugegraph-computer-system
If the output is to HugeGraph-Server, the results are consistent with the local mode; if the output is to HDFS, please check the result files in the /hugegraph-computer/results/{jobId} directory.
For more algorithms, please see: Built-In algorithms
TODO
TODO
The directory for the configuration files is hugegraph-release/conf, and all the configurations related to the service and the graph itself are located in this directory.
The main configuration files include gremlin-server.yaml, rest-server.properties, and hugegraph.properties.
HugeGraphServer integrates GremlinServer and RestServer internally, and gremlin-server.yaml and rest-server.properties are used to configure these two servers.
Now let's introduce these three configuration files one by one.
The default content of the gremlin-server.yaml file is as follows:
# host and port of gremlin server, need to be consistent with host and port in rest-server.properties
#host: 127.0.0.1
#port: 8182
...
ssl: {
enabled: false
}
There are many configuration options above, but for now we only need to pay attention to the following: channelizer and graphs.
- graphs: specifies the graphs that need to be opened when the GremlinServer starts. It is a map structure where the key is the name of the graph and the value is the configuration file path for that graph.
- channelizer: the GremlinServer supports two communication modes with clients, WebSocket and HTTP (default). If WebSocket is chosen, users can quickly experience the features of HugeGraph using Gremlin-Console, but importing large-scale data is not supported. It is recommended to use HTTP for communication, as all peripheral components of HugeGraph are implemented based on HTTP.
By default, the GremlinServer serves at localhost:8182. If you need to modify it, configure the host and port settings:
- host: the hostname or IP address of the machine where the GremlinServer is deployed. Currently, HugeGraphServer does not support distributed deployment, and the GremlinServer is not directly exposed to users.
- port: the port number of the machine where the GremlinServer is deployed.
Additionally, you need to add the corresponding configuration gremlinserver.url=http://host:port in rest-server.properties.
The default content of the rest-server.properties file is as follows:
# bind url
restserver.url=http://127.0.0.1:8080
# gremlin server url, need to be consistent with host and port in gremlin-server.yaml
#gremlinserver.url=http://127.0.0.1:8182

# graphs list with pair NAME:CONF_PATH
graphs=[hugegraph:conf/hugegraph.properties]

# authentication
#auth.authenticator=
#auth.admin_token=
#auth.user_tokens=[]

server.id=server-1
server.role=master
- restserver.url: the URL at which the RestServer provides its services. Modify it according to the actual environment.
- graphs: the RestServer also needs to open graphs when it starts. This option is a map structure where the key is the name of the graph and the value is the configuration file path for that graph.
Note: Both gremlin-server.yaml and rest-server.properties contain the graphs configuration option, and the init-store command initializes based on the graphs specified in the graphs section of gremlin-server.yaml.
The gremlinserver.url configuration option is the URL at which the GremlinServer provides services to the RestServer. By default, it is set to http://localhost:8182; if you need to modify it, it must match the host and port settings in gremlin-server.yaml.
hugegraph.properties is a family of files: if the system has multiple graphs, there will be multiple similar files. This file is used to configure parameters related to graph storage and querying. The default content of the file is as follows:
# gremlin entrence to create graph
gremlin.graph=com.baidu.hugegraph.HugeFactory

# cache config
#schema.cache_capacity=100000
# vertex-cache default is 1000w, 10min expired
#vertex.cache_capacity=10000000
#vertex.cache_expire=600
# edge-cache default is 100w, 10min expired
#edge.cache_capacity=1000000
#edge.cache_expire=600

# schema illegal name template
#schema.illegal_name_regex=\s+|~.*

#vertex.default_label=vertex

backend=rocksdb
serializer=binary

store=hugegraph

raft.mode=false
raft.safe_read=false
raft.use_snapshot=false
raft.endpoint=127.0.0.1:8281
raft.group_peers=127.0.0.1:8281,127.0.0.1:8282,127.0.0.1:8283
raft.path=./raft-log
raft.use_replicator_pipeline=true
raft.election_timeout=10000
raft.snapshot_interval=3600
raft.backend_threads=48
raft.read_index_threads=8
raft.queue_size=16384
raft.queue_publish_timeout=60
raft.apply_batch=1
raft.rpc_threads=80
raft.rpc_connect_timeout=5000
raft.rpc_timeout=60000

# if use 'ikanalyzer', need download jar from 'https://github.com/apache/hugegraph-doc/raw/ik_binary/dist/server/ikanalyzer-2012_u6.jar' to lib directory
search.text_analyzer=jieba
search.text_analyzer_mode=INDEX

# rocksdb backend config
#rocksdb.data_path=/path/to/disk
#rocksdb.wal_path=/path/to/disk

# cassandra backend config
cassandra.host=localhost
cassandra.port=9042
cassandra.username=
cassandra.password=
#cassandra.connect_timeout=5
#cassandra.read_timeout=20
#cassandra.keyspace.strategy=SimpleStrategy
#cassandra.keyspace.replication=3

# hbase backend config
#hbase.hosts=localhost
#hbase.port=2181
#hbase.znode_parent=/hbase
#hbase.threads_max=64

# mysql backend config
#jdbc.driver=com.mysql.jdbc.Driver
#jdbc.url=jdbc:mysql://127.0.0.1:3306
#jdbc.username=root
#jdbc.password=
#jdbc.reconnect_max_times=3
#jdbc.reconnect_interval=3
#jdbc.sslmode=false

# postgresql & cockroachdb backend config
#jdbc.driver=org.postgresql.Driver
#jdbc.url=jdbc:postgresql://localhost:5432/
#jdbc.username=postgres
#jdbc.password=

# palo backend config
#palo.host=127.0.0.1
#palo.poll_interval=10
#palo.temp_dir=./palo-data
#palo.file_limit_size=32
Pay attention to the following uncommented items:
- `gremlin.graph`: The entry point for GremlinServer startup. Users should not modify this item.
- `backend`: The backend storage used, with options including `memory`, `cassandra`, `scylladb`, `mysql`, `hbase`, `postgresql`, and `rocksdb`.
- `serializer`: Mainly for internal use, used to serialize the schema, vertices, and edges to the backend. The corresponding options are `text`, `cassandra`, `scylladb`, and `binary`. (Note: the `rocksdb` backend should have a value of `binary`, while for other backends the values of `backend` and `serializer` should remain consistent; for example, for the `hbase` backend the value should be `hbase`.)
- `store`: The name of the database used for storing the graph in the backend. In Cassandra and ScyllaDB, it corresponds to the keyspace name. The value of this item is unrelated to the graph name in GremlinServer and RestServer, but for clarity, it is recommended to use the same name.
- `cassandra.host`: Only meaningful when the backend is set to `cassandra` or `scylladb`. It specifies the seeds of the Cassandra/ScyllaDB cluster.
- `cassandra.port`: Only meaningful when the backend is set to `cassandra` or `scylladb`. It specifies the native port of the Cassandra/ScyllaDB cluster.
- `rocksdb.data_path`: Only meaningful when the backend is set to `rocksdb`. It specifies the data directory for RocksDB.
- `rocksdb.wal_path`: Only meaningful when the backend is set to `rocksdb`. It specifies the log directory for RocksDB.
- `admin.token`: A token used to retrieve server configuration information. For example: http://localhost:8080/graphs/hugegraph/conf?token=162f7848-0b6d-4faf-b557-3a0797869c55

The system can have multiple graphs, and each graph can use a different backend. For example, suppose there are two graphs named `hugegraph` and `hugegraph1`, where `hugegraph` uses Cassandra as the backend and `hugegraph1` uses RocksDB as the backend.
The configuration method is simple:

1. Modify `gremlin-server.yaml`

Add a key-value pair in the `graphs` section of `gremlin-server.yaml`, where the key is the name of the graph and the value is the path to the graph's configuration file. For example:
graphs: {
hugegraph: conf/hugegraph.properties,
hugegraph1: conf/hugegraph1.properties
}
2. Modify `rest-server.properties`

Add a key-value pair in the `graphs` section of `rest-server.properties`, where the key is the name of the graph and the value is the path to the graph's configuration file. For example:
graphs=[hugegraph:conf/hugegraph.properties, hugegraph1:conf/hugegraph1.properties]
3. Add `hugegraph1.properties`

Copy `hugegraph.properties` and name it `hugegraph1.properties`. Modify the database name corresponding to the graph and the parameters related to the backend. For example:
store=hugegraph1
...

backend=rocksdb
serializer=binary
4. Stop the server, execute `init-store.sh` (to create a new database for the new graph), and restart the server:
$ bin/stop-hugegraph.sh
$ bin/init-store.sh
$ bin/start-hugegraph.sh
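After restarting, you can sanity-check that both graphs are being served, for example by listing the graphs through the REST API (a sketch assuming the default RestServer address; the exact response format may vary by version):

```bash
curl http://127.0.0.1:8080/apis/graphs
# Expect both graph names in the response, e.g. {"graphs": ["hugegraph", "hugegraph1"]}
```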
Corresponding configuration file gremlin-server.yaml
config option | default value | description |
---|---|---|
host | 127.0.0.1 | The host or ip of Gremlin Server. |
port | 8182 | The listening port of Gremlin Server. |
graphs | hugegraph: conf/hugegraph.properties | The map of graphs with name and config file path. |
scriptEvaluationTimeout | 30000 | The timeout for gremlin script execution(millisecond). |
channelizer | org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer | Indicates the protocol which the Gremlin Server provides service. |
authentication | authenticator: com.baidu.hugegraph.auth.StandardAuthenticator, config: {tokens: conf/rest-server.properties} | The authenticator and config(contains tokens path) of authentication mechanism. |
Corresponding configuration file rest-server.properties
config option | default value | description |
---|---|---|
graphs | [hugegraph:conf/hugegraph.properties] | The map of graphs’ name and config file. |
server.id | server-1 | The id of rest server, used for license verification. |
server.role | master | The role of nodes in the cluster, available types are [master, worker, computer] |
restserver.url | http://127.0.0.1:8080 | The url for listening of rest server. |
ssl.keystore_file | server.keystore | The path of server keystore file used when https protocol is enabled. |
ssl.keystore_password | | The password of the server keystore file used when the https protocol is enabled. |
restserver.max_worker_threads | 2 * CPUs | The maximum worker threads of rest server. |
restserver.min_free_memory | 64 | The minimum free memory(MB) of rest server, requests will be rejected when the available memory of system is lower than this value. |
restserver.request_timeout | 30 | The time in seconds within which a request must complete, -1 means no timeout. |
restserver.connection_idle_timeout | 30 | The time in seconds to keep an inactive connection alive, -1 means no timeout. |
restserver.connection_max_requests | 256 | The max number of HTTP requests allowed to be processed on one keep-alive connection, -1 means unlimited. |
gremlinserver.url | http://127.0.0.1:8182 | The url of gremlin server. |
gremlinserver.max_route | 8 | The max route number for gremlin server. |
gremlinserver.timeout | 30 | The timeout in seconds of waiting for gremlin server. |
batch.max_edges_per_batch | 500 | The maximum number of edges submitted per batch. |
batch.max_vertices_per_batch | 500 | The maximum number of vertices submitted per batch. |
batch.max_write_ratio | 50 | The maximum thread ratio for batch writing, only take effect if the batch.max_write_threads is 0. |
batch.max_write_threads | 0 | The maximum threads for batch writing, if the value is 0, the actual value will be set to batch.max_write_ratio * restserver.max_worker_threads. |
auth.authenticator | | The class path of the authenticator implementation, e.g. com.baidu.hugegraph.auth.StandardAuthenticator or com.baidu.hugegraph.auth.ConfigAuthenticator. |
auth.admin_token | 162f7848-0b6d-4faf-b557-3a0797869c55 | Token for administrator operations, only for com.baidu.hugegraph.auth.ConfigAuthenticator. |
auth.graph_store | hugegraph | The name of graph used to store authentication information, like users, only for com.baidu.hugegraph.auth.StandardAuthenticator. |
auth.user_tokens | [hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31] | The map of user tokens with name and password, only for com.baidu.hugegraph.auth.ConfigAuthenticator. |
auth.audit_log_rate | 1000.0 | The max rate of audit log output per user, default value is 1000 records per second. |
auth.cache_capacity | 10240 | The max cache capacity of each auth cache item. |
auth.cache_expire | 600 | The expiration time in seconds of vertex cache. |
auth.remote_url | | If the address is empty, this node provides the auth service; otherwise it acts as an auth client and also provides the auth service through RPC forwarding. The remote url can be set to multiple addresses, separated by ','. |
auth.token_expire | 86400 | The expiration time in seconds after token created |
auth.token_secret | FXQXbJtbCLxODc6tGci732pkH1cyf8Qg | Secret key of HS256 algorithm. |
exception.allow_trace | false | Whether to allow exception trace stack. |
Basic Config Options and Backend Config Options correspond to configuration files: {graph-name}.properties, such as hugegraph.properties
config option | default value | description |
---|---|---|
gremlin.graph | com.baidu.hugegraph.HugeFactory | Gremlin entrance to create graph. |
backend | rocksdb | The data store type, available values are [memory, rocksdb, cassandra, scylladb, hbase, mysql]. |
serializer | binary | The serializer for backend store, available values are [text, binary, cassandra, hbase, mysql]. |
store | hugegraph | The database name like Cassandra Keyspace. |
store.connection_detect_interval | 600 | The interval in seconds for detecting connections, if the idle time of a connection exceeds this value, detect it and reconnect if needed before using, value 0 means detecting every time. |
store.graph | g | The graph table name, which store vertex, edge and property. |
store.schema | m | The schema table name, which store meta data. |
store.system | s | The system table name, which store system data. |
schema.illegal_name_regex | .*\s+$\|~.* | The regex that specifies the illegal format for schema names. |
schema.cache_capacity | 10000 | The max cache size(items) of schema cache. |
vertex.cache_type | l2 | The type of vertex cache, allowed values are [l1, l2]. |
vertex.cache_capacity | 10000000 | The max cache size(items) of vertex cache. |
vertex.cache_expire | 600 | The expire time in seconds of vertex cache. |
vertex.check_customized_id_exist | false | Whether to check the vertices exist for those using customized id strategy. |
vertex.default_label | vertex | The default vertex label. |
vertex.tx_capacity | 10000 | The max size(items) of vertices(uncommitted) in transaction. |
vertex.check_adjacent_vertex_exist | false | Whether to check the adjacent vertices of edges exist. |
vertex.lazy_load_adjacent_vertex | true | Whether to lazy load adjacent vertices of edges. |
vertex.part_edge_commit_size | 5000 | Whether to enable the mode to commit part of edges of vertex, enabled if commit size > 0, 0 means disabled. |
vertex.encode_primary_key_number | true | Whether to encode number value of primary key in vertex id. |
vertex.remove_left_index_at_overwrite | false | Whether remove left index at overwrite. |
edge.cache_type | l2 | The type of edge cache, allowed values are [l1, l2]. |
edge.cache_capacity | 1000000 | The max cache size(items) of edge cache. |
edge.cache_expire | 600 | The expiration time in seconds of edge cache. |
edge.tx_capacity | 10000 | The max size(items) of edges(uncommitted) in transaction. |
query.page_size | 500 | The size of each page when querying by paging. |
query.batch_size | 1000 | The size of each batch when querying by batch. |
query.ignore_invalid_data | true | Whether to ignore invalid data of vertex or edge. |
query.index_intersect_threshold | 1000 | The maximum number of intermediate results to intersect indexes when querying by multiple single index properties. |
query.ramtable_edges_capacity | 20000000 | The maximum number of edges in ramtable, include OUT and IN edges. |
query.ramtable_enable | false | Whether to enable ramtable for query of adjacent edges. |
query.ramtable_vertices_capacity | 10000000 | The maximum number of vertices in ramtable, generally the largest vertex id is used as capacity. |
query.optimize_aggregate_by_index | false | Whether to optimize aggregate query(like count) by index. |
oltp.concurrent_depth | 10 | The min depth to enable concurrent oltp algorithm. |
oltp.concurrent_threads | 10 | Thread number to concurrently execute oltp algorithm. |
oltp.collection_type | EC | The implementation type of collections used in oltp algorithm. |
rate_limit.read | 0 | The max rate(times/s) to execute query of vertices/edges. |
rate_limit.write | 0 | The max rate(items/s) to add/update/delete vertices/edges. |
task.wait_timeout | 10 | Timeout in seconds for waiting for the task to complete, such as when truncating or clearing the backend. |
task.input_size_limit | 16777216 | The job input size limit in bytes. |
task.result_size_limit | 16777216 | The job result size limit in bytes. |
task.sync_deletion | false | Whether to delete schema or expired data synchronously. |
task.ttl_delete_batch | 1 | The batch size used to delete expired data. |
computer.config | /conf/computer.yaml | The config file path of computer job. |
search.text_analyzer | ikanalyzer | Choose a text analyzer for searching the vertex/edge properties, available type are [word, ansj, hanlp, smartcn, jieba, jcseg, mmseg4j, ikanalyzer]. if use ‘ikanalyzer’, need download jar from ‘https://github.com/apache/hugegraph-doc/raw/ik_binary/dist/server/ikanalyzer-2012_u6.jar' to lib directory |
search.text_analyzer_mode | smart | Specify the mode for the text analyzer, the available mode of analyzer are {word: [MaximumMatching, ReverseMaximumMatching, MinimumMatching, ReverseMinimumMatching, BidirectionalMaximumMatching, BidirectionalMinimumMatching, BidirectionalMaximumMinimumMatching, FullSegmentation, MinimalWordCount, MaxNgramScore, PureEnglish], ansj: [BaseAnalysis, IndexAnalysis, ToAnalysis, NlpAnalysis], hanlp: [standard, nlp, index, nShort, shortest, speed], smartcn: [], jieba: [SEARCH, INDEX], jcseg: [Simple, Complex], mmseg4j: [Simple, Complex, MaxWord], ikanalyzer: [smart, max_word]}. |
snowflake.datacenter_id | 0 | The datacenter id of snowflake id generator. |
snowflake.force_string | false | Whether to force the snowflake long id to be a string. |
snowflake.worker_id | 0 | The worker id of snowflake id generator. |
raft.mode | false | Whether the backend storage works in raft mode. |
raft.safe_read | false | Whether to use linearly consistent read. |
raft.use_snapshot | false | Whether to use snapshot. |
raft.endpoint | 127.0.0.1:8281 | The peerid of current raft node. |
raft.group_peers | 127.0.0.1:8281,127.0.0.1:8282,127.0.0.1:8283 | The peers of current raft group. |
raft.path | ./raft-log | The log path of current raft node. |
raft.use_replicator_pipeline | true | Whether to use the replicator pipeline; when enabled, multiple logs can be sent in parallel, and the next log does not have to wait for the ack of the current log before being sent. |
raft.election_timeout | 10000 | Timeout in milliseconds to launch a round of election. |
raft.snapshot_interval | 3600 | The interval in seconds to trigger snapshot save. |
raft.backend_threads | current CPU v-cores | The thread number used to apply task to backend. |
raft.read_index_threads | 8 | The thread number used to execute reading index. |
raft.apply_batch | 1 | The apply batch size to trigger disruptor event handler. |
raft.queue_size | 16384 | The disruptor buffers size for jraft RaftNode, StateMachine and LogManager. |
raft.queue_publish_timeout | 60 | The timeout in second when publish event into disruptor. |
raft.rpc_threads | 80 | The rpc threads for jraft RPC layer. |
raft.rpc_connect_timeout | 5000 | The rpc connect timeout for jraft rpc. |
raft.rpc_timeout | 60000 | The rpc timeout for jraft rpc. |
raft.rpc_buf_low_water_mark | 10485760 | The ChannelOutboundBuffer's low water mark of netty; when the buffer size is less than this value, ChannelOutboundBuffer.isWritable() returns true, meaning low downstream pressure or a good network. |
raft.rpc_buf_high_water_mark | 20971520 | The ChannelOutboundBuffer's high water mark of netty; only when the buffer size exceeds this value does ChannelOutboundBuffer.isWritable() return false, meaning the downstream pressure is too great to process requests or the network is very congested, and the upstream needs to limit its rate. |
raft.read_strategy | ReadOnlyLeaseBased | The linearizability of read strategy. |
RPC server configuration options:

config option | default value | description |
---|---|---|
rpc.client_connect_timeout | 20 | The timeout(in seconds) of rpc client connect to rpc server. |
rpc.client_load_balancer | consistentHash | The rpc client uses a load-balancing algorithm to access multiple rpc servers in one cluster. Default value is ‘consistentHash’, means forwarding by request parameters. |
rpc.client_read_timeout | 40 | The timeout(in seconds) of rpc client read from rpc server. |
rpc.client_reconnect_period | 10 | The period(in seconds) of rpc client reconnect to rpc server. |
rpc.client_retries | 3 | Failed retry number of rpc client calls to rpc server. |
rpc.config_order | 999 | Sofa rpc configuration file loading order, the larger the more later loading. |
rpc.logger_impl | com.alipay.sofa.rpc.log.SLF4JLoggerImpl | Sofa rpc log implementation class. |
rpc.protocol | bolt | Rpc communication protocol, client and server need to be specified the same value. |
rpc.remote_url | | The remote urls of rpc peers, it can be set to multiple addresses, which are separated by ',', empty value means not enabled. |
rpc.server_adaptive_port | false | Whether the bound port is adaptive, if it’s enabled, when the port is in use, automatically +1 to detect the next available port. Note that this process is not atomic, so there may still be port conflicts. |
rpc.server_host | | The hosts/ips bound by rpc server to provide services, empty value means not enabled. |
rpc.server_port | 8090 | The port bound by rpc server to provide services. |
rpc.server_timeout | 30 | The timeout(in seconds) of rpc server execution. |
Cassandra backend configuration options:

config option | default value | description |
---|---|---|
backend | | Must be set to `cassandra`. |
serializer | | Must be set to `cassandra`. |
cassandra.host | localhost | The seeds hostname or ip address of cassandra cluster. |
cassandra.port | 9042 | The seeds port address of cassandra cluster. |
cassandra.connect_timeout | 5 | The cassandra driver connect server timeout(seconds). |
cassandra.read_timeout | 20 | The cassandra driver read from server timeout(seconds). |
cassandra.keyspace.strategy | SimpleStrategy | The replication strategy of keyspace, valid value is SimpleStrategy or NetworkTopologyStrategy. |
cassandra.keyspace.replication | [3] | The keyspace replication factor of SimpleStrategy, like ‘[3]’.Or replicas in each datacenter of NetworkTopologyStrategy, like ‘[dc1:2,dc2:1]’. |
cassandra.username | | The username to use to login to cassandra cluster. |
cassandra.password | | The password corresponding to cassandra.username. |
cassandra.compression_type | none | The compression algorithm of cassandra transport: none/snappy/lz4. |
cassandra.jmx_port | 7199 | The port of JMX API service for cassandra. |
cassandra.aggregation_timeout | 43200 | The timeout in seconds of waiting for aggregation. |
ScyllaDB backend configuration options:

config option | default value | description |
---|---|---|
backend | | Must be set to `scylladb`. |
serializer | | Must be set to `scylladb`. |
Other options are consistent with the Cassandra backend.
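Concretely, switching a graph from Cassandra to ScyllaDB only requires changing these two lines in its properties file (a minimal sketch; the `cassandra.host`/`cassandra.port` options are reused as-is):

```properties
backend=scylladb
serializer=scylladb
```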
RocksDB backend configuration options:

config option | default value | description |
---|---|---|
backend | | Must be set to `rocksdb`. |
serializer | | Must be set to `binary`. |
rocksdb.data_disks | [] | The optimized disks for storing data of RocksDB. The format of each element: STORE/TABLE: /path/disk .Allowed keys are [g/vertex, g/edge_out, g/edge_in, g/vertex_label_index, g/edge_label_index, g/range_int_index, g/range_float_index, g/range_long_index, g/range_double_index, g/secondary_index, g/search_index, g/shard_index, g/unique_index, g/olap] |
rocksdb.data_path | rocksdb-data | The path for storing data of RocksDB. |
rocksdb.wal_path | rocksdb-data | The path for storing WAL of RocksDB. |
rocksdb.allow_mmap_reads | false | Allow the OS to mmap file for reading sst tables. |
rocksdb.allow_mmap_writes | false | Allow the OS to mmap file for writing. |
rocksdb.block_cache_capacity | 8388608 | The amount of block cache in bytes that will be used by RocksDB, 0 means no block cache. |
rocksdb.bloom_filter_bits_per_key | -1 | The bits per key in bloom filter, a good value is 10, which yields a filter with ~ 1% false positive rate, -1 means no bloom filter. |
rocksdb.bloom_filter_block_based_mode | false | Use block based filter rather than full filter. |
rocksdb.bloom_filter_whole_key_filtering | true | True if place whole keys in the bloom filter, else place the prefix of keys. |
rocksdb.bottommost_compression | NO_COMPRESSION | The compression algorithm for the bottommost level of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd. |
rocksdb.bulkload_mode | false | Switch to the mode to bulk load data into RocksDB. |
rocksdb.cache_index_and_filter_blocks | false | Indicating if we’d put index/filter blocks to the block cache. |
rocksdb.compaction_style | LEVEL | Set compaction style for RocksDB: LEVEL/UNIVERSAL/FIFO. |
rocksdb.compression | SNAPPY_COMPRESSION | The compression algorithm for compressing blocks of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd. |
rocksdb.compression_per_level | [NO_COMPRESSION, NO_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION] | The compression algorithms for different levels of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd. |
rocksdb.delayed_write_rate | 16777216 | The rate limit in bytes/s of user write requests when need to slow down if the compaction gets behind. |
rocksdb.log_level | INFO | The info log level of RocksDB. |
rocksdb.max_background_jobs | 8 | Maximum number of concurrent background jobs, including flushes and compactions. |
rocksdb.level_compaction_dynamic_level_bytes | false | Whether to enable level_compaction_dynamic_level_bytes, if it’s enabled we give max_bytes_for_level_multiplier a priority against max_bytes_for_level_base, the bytes of base level is dynamic for a more predictable LSM tree, it is useful to limit worse case space amplification. Turning this feature on/off for an existing DB can cause unexpected LSM tree structure so it’s not recommended. |
rocksdb.max_bytes_for_level_base | 536870912 | The upper-bound of the total size of level-1 files in bytes. |
rocksdb.max_bytes_for_level_multiplier | 10.0 | The ratio between the total size of level (L+1) files and the total size of level L files for all L. |
rocksdb.max_open_files | -1 | The maximum number of open files that can be cached by RocksDB, -1 means no limit. |
rocksdb.max_subcompactions | 4 | The value represents the maximum number of threads per compaction job. |
rocksdb.max_write_buffer_number | 6 | The maximum number of write buffers that are built up in memory. |
rocksdb.max_write_buffer_number_to_maintain | 0 | The total maximum number of write buffers to maintain in memory. |
rocksdb.min_write_buffer_number_to_merge | 2 | The minimum number of write buffers that will be merged together. |
rocksdb.num_levels | 7 | Set the number of levels for this database. |
rocksdb.optimize_filters_for_hits | false | This flag allows us to not store filters for the last level. |
rocksdb.optimize_mode | true | Optimize for heavy workloads and big datasets. |
rocksdb.pin_l0_filter_and_index_blocks_in_cache | false | Indicating if we’d put index/filter blocks to the block cache. |
rocksdb.sst_path | | The path for ingesting SST file into RocksDB. |
rocksdb.target_file_size_base | 67108864 | The target file size for compaction in bytes. |
rocksdb.target_file_size_multiplier | 1 | The size ratio between a level L file and a level (L+1) file. |
rocksdb.use_direct_io_for_flush_and_compaction | false | Enable the OS to use direct read/writes in flush and compaction. |
rocksdb.use_direct_reads | false | Enable the OS to use direct I/O for reading sst tables. |
rocksdb.write_buffer_size | 134217728 | Amount of data in bytes to build up in memory. |
rocksdb.max_manifest_file_size | 104857600 | The max size of manifest file in bytes. |
rocksdb.skip_stats_update_on_db_open | false | Whether to skip statistics update when opening the database, setting this flag true allows us to not update statistics. |
rocksdb.max_file_opening_threads | 16 | The max number of threads used to open files. |
rocksdb.max_total_wal_size | 0 | Total size of WAL files in bytes. Once WALs exceed this size, we will start forcing the flush of column families related, 0 means no limit. |
rocksdb.db_write_buffer_size | 0 | Total size of write buffers in bytes across all column families, 0 means no limit. |
rocksdb.delete_obsolete_files_period | 21600 | The periodicity in seconds when obsolete files get deleted, 0 means always do full purge. |
rocksdb.hard_pending_compaction_bytes_limit | 274877906944 | The hard limit to impose on pending compaction in bytes. |
rocksdb.level0_file_num_compaction_trigger | 2 | Number of files to trigger level-0 compaction. |
rocksdb.level0_slowdown_writes_trigger | 20 | Soft limit on number of level-0 files for slowing down writes. |
rocksdb.level0_stop_writes_trigger | 36 | Hard limit on number of level-0 files for stopping writes. |
rocksdb.soft_pending_compaction_bytes_limit | 68719476736 | The soft limit to impose on pending compaction in bytes. |
HBase backend configuration options:

config option | default value | description |
---|---|---|
backend | | Must be set to `hbase`. |
serializer | | Must be set to `hbase`. |
hbase.hosts | localhost | The hostnames or ip addresses of HBase zookeeper, separated with commas. |
hbase.port | 2181 | The port address of HBase zookeeper. |
hbase.threads_max | 64 | The max threads num of hbase connections. |
hbase.znode_parent | /hbase | The znode parent path of HBase zookeeper. |
hbase.zk_retry | 3 | The recovery retry times of HBase zookeeper. |
hbase.aggregation_timeout | 43200 | The timeout in seconds of waiting for aggregation. |
hbase.kerberos_enable | false | Is Kerberos authentication enabled for HBase. |
hbase.kerberos_keytab | | The HBase's key tab file for kerberos authentication. |
hbase.kerberos_principal | | The HBase's principal for kerberos authentication. |
hbase.krb5_conf | etc/krb5.conf | Kerberos configuration file, including KDC IP, default realm, etc. |
hbase.hbase_site | /etc/hbase/conf/hbase-site.xml | The HBase’s configuration file |
hbase.enable_partition | true | Is pre-split partitions enabled for HBase. |
hbase.vertex_partitions | 10 | The number of partitions of the HBase vertex table. |
hbase.edge_partitions | 30 | The number of partitions of the HBase edge table. |
MySQL backend configuration options:

config option | default value | description |
---|---|---|
backend | | Must be set to `mysql`. |
serializer | | Must be set to `mysql`. |
jdbc.driver | com.mysql.jdbc.Driver | The JDBC driver class to connect database. |
jdbc.url | jdbc:mysql://127.0.0.1:3306 | The url of database in JDBC format. |
jdbc.username | root | The username to login database. |
jdbc.password | ****** | The password corresponding to jdbc.username. |
jdbc.ssl_mode | false | The SSL mode of connections with database. |
jdbc.reconnect_interval | 3 | The interval(seconds) between reconnections when the database connection fails. |
jdbc.reconnect_max_times | 3 | The reconnect times when the database connection fails. |
jdbc.storage_engine | InnoDB | The storage engine of backend store database, like InnoDB/MyISAM/RocksDB for MySQL. |
jdbc.postgresql.connect_database | template1 | The database used to connect when init store, drop store or check store exist. |
PostgreSQL backend configuration options:

config option | default value | description |
---|---|---|
backend | | Must be set to `postgresql`. |
serializer | | Must be set to `postgresql`. |
Other options are consistent with the MySQL backend.
The driver and url of the PostgreSQL backend should be set to:
jdbc.driver=org.postgresql.Driver
jdbc.url=jdbc:postgresql://localhost:5432/
To facilitate authentication usage in different user scenarios, HugeGraph currently provides two built-in authorization modes:

- `ConfigAuthenticator` mode, which stores usernames and passwords in a local configuration file (supports only a single GraphServer).
- `StandardAuthenticator` mode, which supports multi-user authentication and fine-grained access control. It adopts a 4-layer design based on "User-UserGroup-Operation-Resource" to flexibly control user roles and permissions (supports multiple GraphServers).

Some key designs of the `StandardAuthenticator` mode include:

- During initialization, a super administrator (`admin`) user is created. Subsequently, other users can be created by the super administrator; once newly created users are assigned sufficient permissions, they can create or manage more users.
- Each resource consists of three elements: `type`, `label`, and `properties`. There are 18 types in total, and any label and any properties can be combined to form a resource. The internal conditions of a resource are an AND relationship, while the conditions between multiple resources are an OR relationship.
// Scenario: A user only has data read permission for the Beijing area
user(name=xx) -belong-> group(name=xx) -access(read)-> target(graph=graph1, resource={label: person, city: Beijing})
By default, HugeGraph does not enable user authentication. You need to modify the configuration file to enable this feature. HugeGraph provides two built-in authentication modes: StandardAuthenticator
and ConfigAuthenticator
. The StandardAuthenticator
mode supports multi-user authentication and fine-grained permission control, while the ConfigAuthenticator
mode supports simple user permission authentication. Additionally, developers can implement their own HugeAuthenticator
interface to integrate with their existing authentication systems.
Both authentication modes adopt HTTP Basic Authentication. In simple terms, when sending an HTTP request, you need to set the Authentication
header to Basic
and provide the corresponding username and password. The corresponding HTTP plaintext format is as follows:
GET http://localhost:8080/graphs/hugegraph/schema/vertexlabels
Authorization: Basic admin xxxx
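For example, the same request can be sent with curl, which base64-encodes the credentials into the `Authorization` header for you (`admin`/`xxxx` are placeholder credentials):

```bash
curl -u admin:xxxx "http://localhost:8080/graphs/hugegraph/schema/vertexlabels"
```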
The StandardAuthenticator
mode supports user authentication and permission control by storing user information in the database backend. This implementation authenticates users based on their names and passwords (encrypted) stored in the database and controls user permissions based on their roles. Below is the specific configuration process (requires service restart):
Configure the authenticator
and its rest-server
file path in the gremlin-server.yaml
configuration file:
authentication: {
authenticator: com.baidu.hugegraph.auth.StandardAuthenticator,
authenticationHandler: com.baidu.hugegraph.auth.WsAndHttpBasicAuthHandler,
config: {tokens: conf/rest-server.properties}
}
Configure the authenticator
and graph_store
information in the rest-server.properties
configuration file:
auth.authenticator=com.baidu.hugegraph.auth.StandardAuthenticator
auth.graph_store=hugegraph

# auth client config
# If GraphServer and AuthServer are deployed separately, you also need to specify the following configuration. Fill in the IP:RPC port of the AuthServer.
#auth.remote_url=127.0.0.1:8899,127.0.0.1:8898,127.0.0.1:8897
In the above configuration, the graph_store
option specifies which graph to use for storing user information. If there are multiple graphs, you can choose any of them.
In the hugegraph{n}.properties
configuration file, configure the gremlin.graph
information:
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
For detailed API calls and explanations regarding permissions, please refer to the Authentication-API documentation.
The ConfigAuthenticator
mode supports user authentication by predefining user information in the configuration file. This implementation verifies the legitimacy of users based on preconfigured static tokens
. Below is the specific configuration process (requires service restart):
Configure the authenticator
and its rest-server
file path in the gremlin-server.yaml
configuration file:
authentication: {
authenticator: com.baidu.hugegraph.auth.ConfigAuthenticator,
authenticationHandler: com.baidu.hugegraph.auth.WsAndHttpBasicAuthHandler,
config: {tokens: conf/rest-server.properties}
}
Configure the authenticator and its tokens information in the rest-server.properties configuration file:
auth.authenticator=com.baidu.hugegraph.auth.ConfigAuthenticator
auth.admin_token=token-value-a
auth.user_tokens=[hugegraph1:token-value-1, hugegraph2:token-value-2]
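With the tokens above, a request can then be authenticated using the configured credentials (a sketch; hugegraph1 and token-value-1 are just the sample entries from this file, not fixed names):
curl -u hugegraph1:token-value-1 "http://localhost:8080/graphs/hugegraph/schema"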
In the hugegraph{n}.properties configuration file, configure the gremlin.graph information:
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
If you need to support a more flexible user system, you can customize the authenticator for extension. Simply implement the com.baidu.hugegraph.auth.HugeAuthenticator interface with your custom authenticator, and then modify the authenticator configuration item in the configuration file to point to your implementation.
By default, HugeGraphServer uses the HTTP protocol. However, if you have security requirements for your requests, you can configure it to use HTTPS.
Modify the conf/rest-server.properties configuration file and change the schema part of restserver.url to https.
# Set the protocol to HTTPS
restserver.url=https://127.0.0.1:8080
# Server keystore file path. This default value is automatically effective when using HTTPS, and you can modify it as needed.
ssl.keystore_file=conf/hugegraph-server.keystore
# Server keystore file password. This default value is automatically effective when using HTTPS, and you can modify it as needed.
ssl.keystore_password=******
The server’s conf directory already includes a keystore file named hugegraph-server.keystore, and the password for this file is hugegraph. These are the default values when enabling the HTTPS protocol. Users can generate their own keystore file and password, and then modify the values of ssl.keystore_file and ssl.keystore_password.
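For a quick command-line check of the HTTPS endpoint, a curl sketch (the /apis/version path and the exported certificate are assumptions based on the certificate-generation steps shown later; -k skips verification and should only be used for testing):
# Export the server certificate in PEM format for curl (keytool's -rfc flag prints PEM)
keytool -export -rfc -alias serverkey -keystore conf/hugegraph-server.keystore -file server.pem
curl --cacert server.pem "https://localhost:8080/apis/version"
# Or, for testing only, skip certificate verification
curl -k "https://localhost:8080/apis/version"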
When constructing a HugeClient, pass the HTTPS-related configurations. Here’s an example in Java:
String url = "https://localhost:8080";
String graphName = "hugegraph";
HugeClientBuilder builder = HugeClient.builder(url, graphName);
// Client keystore file path
String trustStoreFilePath = "hugegraph.truststore";
// Client keystore password
String trustStorePassword = "******";
builder.configSSL(trustStoreFilePath, trustStorePassword);
HugeClient hugeClient = builder.build();
Note: Before version 1.9.0, HugeGraph-Client was created directly using the new keyword and did not support the HTTPS protocol. Starting from version 1.9.0, it changed to use the builder pattern and supports configuring the HTTPS protocol.
When starting an import task, add the following options in the command line:
# HTTPS
--protocol https
# Client certificate file path. When specifying --protocol as https, the default value conf/hugegraph.truststore is automatically used, and you can modify it as needed.
--trust-store-file {file}
# Client certificate file password. When specifying --protocol as https, the default value hugegraph is automatically used, and you can modify it as needed.
--trust-store-password {password}
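Put together, a loader invocation over HTTPS might look like this (a sketch; the schema/data file names and the -g/-f/-s/-h/-p options are illustrative, and the truststore values shown are the defaults):
bin/hugegraph-loader.sh -g hugegraph -f example/struct.json -s example/schema.groovy \
    -h 127.0.0.1 -p 8080 \
    --protocol https \
    --trust-store-file conf/hugegraph.truststore \
    --trust-store-password hugegraph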
Under the conf directory of hugegraph-loader, there is already a default client certificate file named hugegraph.truststore, and its password is hugegraph.
When executing commands, add the following options in the command line:
# Client certificate file path. When using the HTTPS protocol in the URL, the default value conf/hugegraph.truststore is automatically used, and you can modify it as needed.
--trust-store-file {file}
# Client certificate file password. When using the HTTPS protocol in the URL, the default value hugegraph is automatically used, and you can modify it as needed.
--trust-store-password {password}
# When executing migration commands and using the --target-url with the HTTPS protocol, the default value conf/hugegraph.truststore is automatically used, and you can modify it as needed.
--target-trust-store-file {target-file}
# When executing migration commands and using the --target-url with the HTTPS protocol, the default value hugegraph is automatically used, and you can modify it as needed.
--target-trust-store-password {target-password}
Under the conf directory of hugegraph-tools, there is already a default client certificate file named hugegraph.truststore, and its password is hugegraph.
This section provides an example of generating certificates. If the default certificate is sufficient or if you already know how to generate certificates, you can skip this section.
server.keystore is for the server’s use and contains its private key.
keytool -genkey -alias serverkey -keyalg RSA -keystore server.keystore
During the process, fill in the description information according to your requirements. The description information for the default certificate is as follows:
First and Last Name: hugegraph
Organizational Unit Name: hugegraph
Organization Name: hugegraph
City or Locality Name: BJ
State or Province Name: BJ
Country Code: CN
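To double-check what was generated, you can list the keystore entries (keytool will prompt for the keystore password; the default sample here uses hugegraph):
keytool -list -keystore server.keystore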
keytool -export -alias serverkey -keystore server.keystore -file server.crt
server.crt is the server’s certificate.
keytool -import -alias serverkey -file server.crt -keystore client.truststore
client.truststore is for the client’s use and contains the trusted certificate.
config option | default value | description |
---|---|---|
algorithm.message_class | org.apache.hugegraph.computer.core.config.Null | The class of message passed when compute vertex. |
algorithm.params_class | org.apache.hugegraph.computer.core.config.Null | The class used to transfer algorithms’ parameters before algorithm been run. |
algorithm.result_class | org.apache.hugegraph.computer.core.config.Null | The class of vertex’s value, the instance is used to store computation result for the vertex. |
allocator.max_vertices_per_thread | 10000 | Maximum number of vertices per thread processed in each memory allocator |
bsp.etcd_endpoints | http://localhost:2379 | The end points to access etcd. |
bsp.log_interval | 30000 | The log interval(in ms) to print the log while waiting bsp event. |
bsp.max_super_step | 10 | The max super step of the algorithm. |
bsp.register_timeout | 300000 | The max timeout to wait for master and workers to register. |
bsp.wait_master_timeout | 86400000 | The max timeout(in ms) to wait for master bsp event. |
bsp.wait_workers_timeout | 86400000 | The max timeout to wait for workers bsp event. |
hgkv.max_data_block_size | 65536 | The max byte size of hgkv-file data block. |
hgkv.max_file_size | 2147483648 | The max number of bytes in each hgkv-file. |
hgkv.max_merge_files | 10 | The max number of files to merge at one time. |
hgkv.temp_file_dir | /tmp/hgkv | This folder is used to store temporary files, temporary files will be generated during the file merging process. |
hugegraph.name | hugegraph | The graph name to load data and write results back. |
hugegraph.url | http://127.0.0.1:8080 | The hugegraph url to load data and write results back. |
input.edge_direction | OUT | The data of the edge in which direction is loaded, when the value is BOTH, the edges in both OUT and IN direction will be loaded. |
input.edge_freq | MULTIPLE | The frequency with which edges can exist between a pair of vertices, allowed values: [SINGLE, SINGLE_PER_LABEL, MULTIPLE]. SINGLE means that only one edge can exist between a pair of vertices, identified by sourceId + targetId; SINGLE_PER_LABEL means that one edge can exist per edge label between a pair of vertices, identified by sourceId + edgelabel + targetId; MULTIPLE means that many edges can exist between a pair of vertices, identified by sourceId + edgelabel + sortValues + targetId. |
input.filter_class | org.apache.hugegraph.computer.core.input.filter.DefaultInputFilter | The class to create the input-filter object; the input-filter is used to filter vertex edges according to user needs. |
input.loader_schema_path | The schema path of loader input, only takes effect when input.source_type is set to loader | |
input.loader_struct_path | The struct path of loader input, only takes effect when input.source_type is set to loader | |
input.max_edges_in_one_vertex | 200 | The maximum number of adjacent edges allowed to be attached to a vertex, the adjacent edges will be stored and transferred together as a batch unit. |
input.source_type | hugegraph-server | The source type to load input data from, allowed values: [‘hugegraph-server’, ‘hugegraph-loader’]; ‘hugegraph-loader’ means using hugegraph-loader to load data from HDFS or a file, in which case ‘input.loader_struct_path’ and ‘input.loader_schema_path’ must also be configured. |
input.split_fetch_timeout | 300 | The timeout in seconds to fetch input splits |
input.split_max_splits | 10000000 | The maximum number of input splits |
input.split_page_size | 500 | The page size for streamed load input split data |
input.split_size | 1048576 | The input split size in bytes |
job.id | local_0001 | The job id on Yarn cluster or K8s cluster. |
job.partitions_count | 1 | The partitions count for computing one graph algorithm job. |
job.partitions_thread_nums | 4 | The number of threads for partition parallel compute. |
job.workers_count | 1 | The workers count for computing one graph algorithm job. |
master.computation_class | org.apache.hugegraph.computer.core.master.DefaultMasterComputation | Master-computation is computation that can determine whether to continue next superstep. It runs at the end of each superstep on master. |
output.batch_size | 500 | The batch size of output |
output.batch_threads | 1 | The threads number used to batch output |
output.hdfs_core_site_path | The hdfs core site path. | |
output.hdfs_delimiter | , | The delimiter of hdfs output. |
output.hdfs_kerberos_enable | false | Is Kerberos authentication enabled for Hdfs. |
output.hdfs_kerberos_keytab | The Hdfs’s key tab file for kerberos authentication. | |
output.hdfs_kerberos_principal | The Hdfs’s principal for kerberos authentication. | |
output.hdfs_krb5_conf | /etc/krb5.conf | Kerberos configuration file. |
output.hdfs_merge_partitions | true | Whether merge output files of multiple partitions. |
output.hdfs_path_prefix | /hugegraph-computer/results | The directory of hdfs output result. |
output.hdfs_replication | 3 | The replication number of hdfs. |
output.hdfs_site_path | The hdfs site path. | |
output.hdfs_url | hdfs://127.0.0.1:9000 | The hdfs url of output. |
output.hdfs_user | hadoop | The hdfs user of output. |
output.output_class | org.apache.hugegraph.computer.core.output.LogOutput | The class to output the computation result of each vertex. Be called after iteration computation. |
output.result_name | value | The value is assigned dynamically by #name() of instance created by WORKER_COMPUTATION_CLASS. |
output.result_write_type | OLAP_COMMON | The result write-type to output to hugegraph, allowed values are: [OLAP_COMMON, OLAP_SECONDARY, OLAP_RANGE]. |
output.retry_interval | 10 | The retry interval when output failed |
output.retry_times | 3 | The retry times when output failed |
output.single_threads | 1 | The threads number used to single output |
output.thread_pool_shutdown_timeout | 60 | The timeout seconds of output threads pool shutdown |
output.with_adjacent_edges | false | Output the adjacent edges of the vertex or not |
output.with_edge_properties | false | Output the properties of the edge or not |
output.with_vertex_properties | false | Output the properties of the vertex or not |
sort.thread_nums | 4 | The number of threads performing internal sorting. |
transport.client_connect_timeout | 3000 | The timeout(in ms) of client connect to server. |
transport.client_threads | 4 | The number of transport threads for client. |
transport.close_timeout | 10000 | The timeout(in ms) of close server or close client. |
transport.finish_session_timeout | 0 | The timeout(in ms) to finish session, 0 means using (transport.sync_request_timeout * transport.max_pending_requests). |
transport.heartbeat_interval | 20000 | The minimum interval(in ms) between heartbeats on client side. |
transport.io_mode | AUTO | The network IO Mode, either ‘NIO’, ‘EPOLL’, ‘AUTO’; ‘AUTO’ means selecting the proper mode automatically. |
transport.max_pending_requests | 8 | The max number of client unreceived ack, it will trigger the sending unavailable if the number of unreceived ack >= max_pending_requests. |
transport.max_syn_backlog | 511 | The capacity of SYN queue on server side, 0 means using system default value. |
transport.max_timeout_heartbeat_count | 120 | The maximum times of timeout heartbeat on client side, if the number of timeouts waiting for heartbeat response continuously > max_heartbeat_timeouts the channel will be closed from client side. |
transport.min_ack_interval | 200 | The minimum interval(in ms) of server reply ack. |
transport.min_pending_requests | 6 | The minimum number of client unreceived ack, it will trigger the sending available if the number of unreceived ack < min_pending_requests. |
transport.network_retries | 3 | The number of retry attempts for network communication if the network is unstable. |
transport.provider_class | org.apache.hugegraph.computer.core.network.netty.NettyTransportProvider | The transport provider, currently only supports Netty. |
transport.receive_buffer_size | 0 | The size of socket receive-buffer in bytes, 0 means using system default value. |
transport.recv_file_mode | true | Whether enable receive buffer-file mode, it will receive buffer write file from socket by zero-copy if enable. |
transport.send_buffer_size | 0 | The size of socket send-buffer in bytes, 0 means using system default value. |
transport.server_host | 127.0.0.1 | The server hostname or ip to listen on to transfer data. |
transport.server_idle_timeout | 360000 | The max timeout(in ms) of server idle. |
transport.server_port | 0 | The server port to listen on to transfer data. The system will assign a random port if it’s set to 0. |
transport.server_threads | 4 | The number of transport threads for server. |
transport.sync_request_timeout | 10000 | The timeout(in ms) to wait response after sending sync-request. |
transport.tcp_keep_alive | true | Whether enable TCP keep-alive. |
transport.transport_epoll_lt | false | Whether enable EPOLL level-trigger. |
transport.write_buffer_high_mark | 67108864 | The high water mark for write buffer in bytes, it will trigger the sending unavailable if the number of queued bytes > write_buffer_high_mark. |
transport.write_buffer_low_mark | 33554432 | The low water mark for write buffer in bytes, it will trigger the sending available if the number of queued bytes < write_buffer_low_mark. |
transport.write_socket_timeout | 3000 | The timeout(in ms) to write data to socket buffer. |
valuefile.max_segment_size | 1073741824 | The max number of bytes in each segment of value-file. |
worker.combiner_class | org.apache.hugegraph.computer.core.config.Null | Combiner can combine messages into one value for a vertex, for example page-rank algorithm can combine messages of a vertex to a sum value. |
worker.computation_class | org.apache.hugegraph.computer.core.config.Null | The class to create worker-computation object, worker-computation is used to compute each vertex in each superstep. |
worker.data_dirs | [jobs] | The directories separated by ‘,’ that received vertices and messages can persist into. |
worker.edge_properties_combiner_class | org.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombiner | The combiner can combine several properties of the same edge into one properties at inputstep. |
worker.partitioner | org.apache.hugegraph.computer.core.graph.partition.HashPartitioner | The partitioner that decides which partition a vertex should be in, and which worker a partition should be in. |
worker.received_buffers_bytes_limit | 104857600 | The limit bytes of buffers of received data, the total size of all buffers can’t exceed this limit. If received buffers reach this limit, they will be merged into a file. |
worker.vertex_properties_combiner_class | org.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombiner | The combiner can combine several properties of the same vertex into one properties at inputstep. |
worker.wait_finish_messages_timeout | 86400000 | The max timeout(in ms) message-handler wait for finish-message of all workers. |
worker.wait_sort_timeout | 600000 | The max timeout(in ms) message-handler wait for sort-thread to sort one batch of buffers. |
worker.write_buffer_capacity | 52428800 | The initial size of write buffer that used to store vertex or message. |
worker.write_buffer_threshold | 52428800 | The threshold of write buffer, exceeding it will trigger sorting, the write buffer is used to store vertex or message. |
NOTE: Option needs to be converted through environment variable settings, e.g. k8s.internal_etcd_url => INTERNAL_ETCD_URL
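For example, to override these options in a container environment (a sketch; the values are illustrative):
# Drop the "k8s." prefix and upper-case the rest, as in k8s.internal_etcd_url => INTERNAL_ETCD_URL
export INTERNAL_ETCD_URL="http://127.0.0.1:2379"
export WATCH_NAMESPACE="hugegraph-computer-system"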
config option | default value | description |
---|---|---|
k8s.auto_destroy_pod | true | Whether to automatically destroy all pods when the job is completed or failed. |
k8s.close_reconciler_timeout | 120 | The max timeout(in ms) to close reconciler. |
k8s.internal_etcd_url | http://127.0.0.1:2379 | The internal etcd url for operator system. |
k8s.max_reconcile_retry | 3 | The max retry times of reconcile. |
k8s.probe_backlog | 50 | The maximum backlog for serving health probes. |
k8s.probe_port | 9892 | The value is the port that the controller bind to for serving health probes. |
k8s.ready_check_internal | 1000 | The time interval (in ms) between ready checks. |
k8s.ready_timeout | 30000 | The max timeout(in ms) of check ready. |
k8s.reconciler_count | 10 | The max number of reconciler thread. |
k8s.resync_period | 600000 | The minimum frequency at which watched resources are reconciled. |
k8s.timezone | Asia/Shanghai | The timezone of computer job and operator. |
k8s.watch_namespace | hugegraph-computer-system | Watch custom resources in this namespace and ignore other namespaces; ‘*’ means all namespaces will be watched. |
spec | default value | description | required |
---|---|---|---|
algorithmName | The name of algorithm. | true | |
jobId | The job id. | true | |
image | The image of algorithm. | true | |
computerConf | The map of computer config options. | true | |
workerInstances | The number of worker instances; it overrides the ‘job.workers_count’ option. | true | |
pullPolicy | Always | The pull-policy of image, detail please refer to: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy | false |
pullSecrets | The pull-secrets of Image, detail please refer to: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | false | |
masterCpu | The cpu limit of master, the unit can be ’m’ or without unit detail please refer to:https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu | false | |
workerCpu | The cpu limit of worker, the unit can be ’m’ or without unit detail please refer to:https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu | false | |
masterMemory | The memory limit of master, the unit can be one of Ei、Pi、Ti、Gi、Mi、Ki detail please refer to:https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory | false | |
workerMemory | The memory limit of worker, the unit can be one of Ei、Pi、Ti、Gi、Mi、Ki detail please refer to:https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory | false | |
log4jXml | The content of log4j.xml for computer job. | false | |
jarFile | The jar path of computer algorithm. | false | |
remoteJarUri | The remote jar uri of the computer algorithm; it overrides the jar in the algorithm image. | false | |
jvmOptions | The java startup parameters of computer job. | false | |
envVars | please refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/ | false | |
envFrom | please refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/ | false | |
masterCommand | bin/start-computer.sh | The run command of master, equivalent to ‘Entrypoint’ field of Docker. | false |
masterArgs | ["-r master", “-d k8s”] | The run args of master, equivalent to ‘Cmd’ field of Docker. | false |
workerCommand | bin/start-computer.sh | The run command of worker, equivalent to ‘Entrypoint’ field of Docker. | false |
workerArgs | ["-r worker", “-d k8s”] | The run args of worker, equivalent to ‘Cmd’ field of Docker. | false |
volumes | Please refer to: https://kubernetes.io/docs/concepts/storage/volumes/ | false | |
volumeMounts | Please refer to: https://kubernetes.io/docs/concepts/storage/volumes/ | false | |
secretPaths | The map of k8s-secret name and mount path. | false | |
configMapPaths | The map of k8s-configmap name and mount path. | false | |
podTemplateSpec | Please refer to: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-template-v1/#PodTemplateSpec | false | |
securityContext | Please refer to: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ | false |
config option | default value | description |
---|---|---|
k8s.build_image_bash_path | The path of command used to build image. | |
k8s.enable_internal_algorithm | true | Whether enable internal algorithm. |
k8s.framework_image_url | hugegraph/hugegraph-computer:latest | The image url of computer framework. |
k8s.image_repository_password | The password for login image repository. | |
k8s.image_repository_registry | The address for login image repository. | |
k8s.image_repository_url | hugegraph/hugegraph-computer | The url of image repository. |
k8s.image_repository_username | The username for login image repository. | |
k8s.internal_algorithm | [pageRank] | The name list of all internal algorithm. |
k8s.internal_algorithm_image_url | hugegraph/hugegraph-computer:latest | The image url of internal algorithm. |
k8s.jar_file_dir | /cache/jars/ | The directory to which the algorithm jar is uploaded. |
k8s.kube_config | ~/.kube/config | The path of k8s config file. |
k8s.log4j_xml_path | The log4j.xml path for computer job. | |
k8s.namespace | hugegraph-computer-system | The namespace of hugegraph-computer system. |
k8s.pull_secret_names | [] | The names of pull-secret for pulling image. |
HugeGraph-Server provides interfaces for clients to operate on graphs based on the HTTP protocol through the HugeGraph-API. These interfaces primarily include the ability to add, delete, modify, and query metadata and graph data, perform traversal algorithms, handle variables, and perform other graph-related operations.
HugeGraph provides a single interface to get all Schema information of a graph, including: PropertyKey, VertexLabel, EdgeLabel and IndexLabel.
GET http://localhost:8080/graphs/{graph_name}/schema
e.g., GET http://localhost:8080/graphs/hugegraph/schema
200
{
    "propertykeys": [
        {
            "id": 7,
            "name": "price",
            "data_type": "DOUBLE",
            "cardinality": "SINGLE",
            "aggregate_type": "NONE",
            "write_type": "OLTP",
            "properties": [],
            "status": "CREATED",
            "user_data": {
                "~create_time": "2023-05-08 17:49:05.316"
            }
        },
        {
            ...
            "cardinality": "SINGLE",
            "aggregate_type": "NONE",
            "write_type": "OLTP",
            "properties": [],
            "status": "CREATED",
            "user_data": {
                "~create_time": "2023-05-08 17:49:05.309"
            }
        },
        {
            ...
            "cardinality": "SINGLE",
            "aggregate_type": "NONE",
            "write_type": "OLTP",
            "properties": [],
            "status": "CREATED",
            "user_data": {
                "~create_time": "2023-05-08 17:49:05.287"
            }
        },
        {
            ...
            "cardinality": "SINGLE",
            "aggregate_type": "NONE",
            "write_type": "OLTP",
            "properties": [],
            "status": "CREATED",
            "user_data": {
                "~create_time": "2023-05-08 17:49:05.280"
            }
        },
        {
            ...
            "cardinality": "SINGLE",
            "aggregate_type": "NONE",
            "write_type": "OLTP",
            "properties": [],
            "status": "CREATED",
            "user_data": {
                "~create_time": "2023-05-08 17:49:05.301"
            }
        },
        {
            ...
            "cardinality": "SINGLE",
            "aggregate_type": "NONE",
            "write_type": "OLTP",
            "properties": [],
            "status": "CREATED",
            "user_data": {
                "~create_time": "2023-05-08 17:49:05.294"
            }
        },
        {
            ...
            "cardinality": "SINGLE",
            "aggregate_type": "NONE",
            "write_type": "OLTP",
            "properties": [],
            "status": "CREATED",
            "user_data": {
                "~create_time": "2023-05-08 17:49:05.250"
            }
        }
    ],
    "vertexlabels": [
        {
            ...
            "primary_keys": [
                "name"
            ],
            "nullable_keys": [
                "age",
                "city"
            ],
            "index_labels": [
                "personByAge",
                "personByCity",
                "personByAgeAndCity"
            ],
            ...
            "ttl": 0,
            "enable_label_index": true,
            "user_data": {
                "~create_time": "2023-05-08 17:49:05.336"
            }
        },
        {
            "id": 2,
            "name": "software",
            "id_strategy": "CUSTOMIZE_NUMBER",
            "primary_keys": [],
            "nullable_keys": [],
            "index_labels": [
                "softwareByPrice"
            ],
            ...
            "ttl": 0,
            "enable_label_index": true,
            "user_data": {
                "~create_time": "2023-05-08 17:49:05.347"
            }
        }
    ],
    "edgelabels": [
        {
            ...
            "name": "knows",
            "source_label": "person",
            "target_label": "person",
            "frequency": "SINGLE",
            "sort_keys": [],
            "nullable_keys": [],
            "index_labels": [
                "knowsByWeight"
            ],
            ...
            "ttl": 0,
            "enable_label_index": true,
            "user_data": {
                "~create_time": "2023-05-08 17:49:08.437"
            }
        },
        {
            ...
            "source_label": "person",
            "target_label": "software",
            "frequency": "SINGLE",
            "sort_keys": [],
            "nullable_keys": [],
            "index_labels": [
                "createdByDate",
                "createdByWeight"
            ],
            ...
            "ttl": 0,
            "enable_label_index": true,
            "user_data": {
                "~create_time": "2023-05-08 17:49:08.446"
            }
        }
    ],
    "indexlabels": [
        {
            "id": 1,
            "name": "personByAge",
            "base_type": "VERTEX_LABEL",
            "base_value": "person",
            "index_type": "RANGE_INT",
            "fields": [
                "age"
            ],
            "status": "CREATED",
            "user_data": {
                "~create_time": "2023-05-08 17:49:05.375"
            }
        },
        {
            "id": 2,
            "name": "personByCity",
            "base_type": "VERTEX_LABEL",
            "base_value": "person",
            ...
            "status": "CREATED",
            "user_data": {
                "~create_time": "2023-05-08 17:49:06.898"
            }
        },
        {
            "id": 3,
            "name": "personByAgeAndCity",
            "base_type": "VERTEX_LABEL",
            "base_value": "person",
            "index_type": "SECONDARY",
            "fields": [
                "age",
                "city"
            ],
            "status": "CREATED",
            "user_data": {
                "~create_time": "2023-05-08 17:49:07.407"
            }
        },
        {
            "id": 4,
            "name": "softwareByPrice",
            "base_type": "VERTEX_LABEL",
            "base_value": "software",
            "index_type": "RANGE_DOUBLE",
            "fields": [
                "price"
            ],
            "status": "CREATED",
            "user_data": {
                "~create_time": "2023-05-08 17:49:07.916"
            }
        },
        {
            "id": 5,
            "name": "createdByDate",
            "base_type": "EDGE_LABEL",
            "base_value": "created",
            "index_type": "SECONDARY",
            "fields": [
                "date"
            ],
            "status": "CREATED",
            "user_data": {
                "~create_time": "2023-05-08 17:49:08.454"
            }
        },
        {
            "id": 6,
            "name": "createdByWeight",
            "base_type": "EDGE_LABEL",
            "base_value": "created",
            "index_type": "RANGE_DOUBLE",
            "fields": [
                "weight"
            ],
            "status": "CREATED",
            "user_data": {
                "~create_time": "2023-05-08 17:49:08.963"
            }
        },
        {
            "id": 7,
            "name": "knowsByWeight",
            "base_type": "EDGE_LABEL",
            "base_value": "knows",
            ...
            "status": "CREATED",
            "user_data": {
                "~create_time": "2023-05-08 17:49:09.473"
            }
        }
    ]
}
Params Description:
Request Body Field Description:
POST http://localhost:8080/graphs/hugegraph/schema/propertykeys
{
"name": "age",
"data_type": "INT",
"cardinality": "SINGLE"
...
},
"task_id": 0
}
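The same request can be issued with curl (a sketch assuming no authentication; add -u user:password if auth is enabled):
curl -X POST -H "Content-Type: application/json" \
    -d '{"name": "age", "data_type": "INT", "cardinality": "SINGLE"}' \
    "http://localhost:8080/graphs/hugegraph/schema/propertykeys"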
The allowed actions are append (add) and eliminate (remove).
PUT http://localhost:8080/graphs/hugegraph/schema/propertykeys/age?action=append
{
"name": "age",
"user_data": {
"min": 0,
...
},
"task_id": 0
}
GET http://localhost:8080/graphs/hugegraph/schema/propertykeys
200
{
"propertykeys": [
{
...
}
]
}
GET http://localhost:8080/graphs/hugegraph/schema/propertykeys/age
Where age is the name of the PropertyKey to be retrieved.
200
{
"id": 1,
"name": "age",
...
"~create_time": "2022-05-13 13:47:23.745"
}
}
DELETE http://localhost:8080/graphs/hugegraph/schema/propertykeys/age
Where age is the name of the PropertyKey to be deleted.
202
{
"task_id" : 0
}
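As a curl sketch (the 202 response above carries the task_id of the asynchronous deletion):
curl -X DELETE "http://localhost:8080/graphs/hugegraph/schema/propertykeys/age"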
Assuming that the PropertyKeys listed in 1.1.3 have already been created.
Params Description:
POST http://localhost:8080/graphs/hugegraph/schema/vertexlabels
{
"name": "person",
"id_strategy": "DEFAULT",
"properties": [
...
"enable_label_index": true,
"user_data": {}
}
Starting from version v0.11.2, hugegraph-server supports Time-to-Live (TTL) functionality for vertices. The TTL for vertices is set through VertexLabel. For example, if you want the vertices of type “person” to have a lifespan of one day, you need to set the TTL field to 86400000 (in milliseconds) when creating the “person” VertexLabel.
{
"name": "person",
"id_strategy": "DEFAULT",
"properties": [
...
"ttl": 86400000,
"enable_label_index": true
}
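For reference, the TTL value here is one day expressed in milliseconds: 24 × 60 × 60 × 1000 = 86,400,000.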
Additionally, if the vertex has a property called “createdTime” and you want to use it as the starting point for calculating the vertex’s lifespan, you can set the ttl_start_time field in the VertexLabel. For example, if the “person” VertexLabel has a property called “createdTime” of type Date, and you want the vertices of type “person” to live for one day starting from the creation time, the Request Body for creating the “person” VertexLabel would be as follows:
{
"name": "person",
"id_strategy": "DEFAULT",
"properties": [
...
"ttl_start_time": "createdTime",
"enable_label_index": true
}
The allowed actions are append (add) and eliminate (remove).
PUT http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person?action=append
{
"name": "person",
"properties": [
"city"
...
"super": "animal"
}
}
GET http://localhost:8080/graphs/hugegraph/schema/vertexlabels
200
{
"vertexlabels": [
{
...
}
]
}
GET http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person
200
{
"id": 1,
"primary_keys": [
...
"super": "animal"
}
}
Deleting a VertexLabel will result in the removal of corresponding vertices and related index data. This operation will generate an asynchronous task.
DELETE http://localhost:8080/graphs/hugegraph/schema/vertexlabels/person
202
{
"task_id": 1
}
Note:
You can use GET http://localhost:8080/graphs/hugegraph/tasks/1 (where “1” is the task_id) to query the execution status of the asynchronous task. For more information, refer to the Asynchronous Task RESTful API.
Assuming the PropertyKeys from 1.1.3 and the VertexLabels from 1.2.3 have already been created.
Params Description:
POST http://localhost:8080/graphs/hugegraph/schema/edgelabels
{
"name": "created",
"source_label": "person",
"target_label": "software",
...
"enable_label_index": true,
"user_data": {}
}
Starting from version 0.11.2 of hugegraph-server, the TTL (Time to Live) feature for edges is supported. The TTL for edges is set through EdgeLabel. For example, if you want the “knows” type of edge to have a lifespan of one day, you need to set the TTL field to 86400000 when creating the “knows” EdgeLabel, where the unit is milliseconds.
{
"id": 1,
"sort_keys": [
],
...
"ttl": 86400000,
"user_data": {}
}
Additionally, when the edge has a property called “createdTime” and you want to use the “createdTime” property as the starting point for calculating the edge’s lifespan, you can set the ttl_start_time field in the EdgeLabel. For example, if the knows EdgeLabel has a property called “createdTime” which is of type Date, and you want the “knows” type of edge to live for one day from the time of creation, the Request Body for creating the knows EdgeLabel would be as follows:
{
"id": 1,
"sort_keys": [
],
...
"ttl_start_time": "createdTime",
"user_data": {}
}
The allowed actions are append (add) and eliminate (remove).
PUT http://localhost:8080/graphs/hugegraph/schema/edgelabels/created?action=append
{
"name": "created",
"properties": [
"weight"
...
"enable_label_index": true,
"user_data": {}
}
GET http://localhost:8080/graphs/hugegraph/schema/edgelabels
200
{
"edgelabels": [
{
...
}
]
}
GET http://localhost:8080/graphs/hugegraph/schema/edgelabels/created
200
{
"id": 1,
"sort_keys": [
...
"enable_label_index": true,
"user_data": {}
}
Deleting an EdgeLabel will result in the deletion of corresponding edges and related index data. This operation will generate an asynchronous task.
DELETE http://localhost:8080/graphs/hugegraph/schema/edgelabels/created
202
{
"task_id": 1
}
Note:
You can query the execution status of an asynchronous task by using GET http://localhost:8080/graphs/hugegraph/tasks/1 (where “1” is the task_id). For more information, refer to the Asynchronous Task RESTful API.
Assuming the PropertyKeys from 1.1.3, the VertexLabels from 1.2.3, and the EdgeLabels from 1.3.3 have already been created.
POST http://localhost:8080/graphs/hugegraph/schema/indexlabels
{
"name": "personByCity",
"base_type": "VERTEX_LABEL",
"base_value": "person",
},
"task_id": 2
}
GET http://localhost:8080/graphs/hugegraph/schema/indexlabels
200
{
"indexlabels": [
{
}
]
}
GET http://localhost:8080/graphs/hugegraph/schema/indexlabels/personByCity
200
{
"id": 1,
"base_type": "VERTEX_LABEL",
],
"index_type": "SECONDARY"
}
Deleting an IndexLabel will result in the deletion of related index data. This operation will generate an asynchronous task.
DELETE http://localhost:8080/graphs/hugegraph/schema/indexlabels/personByCity
202
{
"task_id": 1
}
Note:
You can query the execution status of an asynchronous task by using
GET http://localhost:8080/graphs/hugegraph/tasks/1
(where “1” is the task_id). For more information, refer to the Asynchronous Task RESTful API.
PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/indexlabels/personByCity
202
{
"task_id": 1
}
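The delete and rebuild calls in this chapter all return a task_id like the one above, and a client typically polls the Task API until the task reaches a final state. Below is a minimal sketch using Java 11's built-in HttpClient; the substring check on the raw JSON body is a deliberate simplification, and the "success" / "failed" status strings are assumptions about the task JSON (a real client would parse the body and inspect its status field):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Poll an asynchronous task until the server reports a final state.
public class TaskPoller {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String url = "http://localhost:8080/graphs/hugegraph/tasks/1";
        while (true) {
            HttpResponse<String> resp = client.send(
                    HttpRequest.newBuilder(URI.create(url)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            String body = resp.body();
            // Assumed final states; adjust to the actual task status values.
            if (body.contains("success") || body.contains("failed")) {
                System.out.println("task finished: " + body);
                break;
            }
            Thread.sleep(1000); // wait one second between polls
        }
    }
}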
Note:
You can get the asynchronous job status by
GET http://localhost:8080/graphs/hugegraph/tasks/${task_id}
(the task_id here should be 1). See more in the AsyncJob RESTful API.
PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/vertexlabels/person
202
{
"task_id": 2
}
Note:
You can get the asynchronous job status by
GET http://localhost:8080/graphs/hugegraph/tasks/${task_id}
(the task_id here should be 2). See more in the AsyncJob RESTful API.
PUT http://localhost:8080/graphs/hugegraph/jobs/rebuild/edgelabels/created
202
{
"task_id": 3
}
Note:
You can get the asynchronous job status by
GET http://localhost:8080/graphs/hugegraph/tasks/${task_id}
(the task_id here should be 3). See more in the AsyncJob RESTful API.
In vertex types, the Id strategy determines the type of the vertex Id, with the corresponding relationships as follows:
Id_Strategy | id type |
---|---|
AUTOMATIC | number |
PRIMARY_KEY | string |
CUSTOMIZE_STRING | string |
CUSTOMIZE_NUMBER | number |
CUSTOMIZE_UUID | uuid |
For the GET/PUT/DELETE
API of a vertex, the id part in the URL should be passed as the id value with type information. This type information is indicated by whether the JSON string is enclosed in quotes: a quoted id (e.g. "1:marko") is treated as a string id, while an unquoted id (e.g. 123456) is treated as a number id.
The following examples assume that the aforementioned schema information has been created.
POST http://localhost:8080/graphs/hugegraph/graph/vertices
{
"label": "person",
"properties": {
"name": "marko",
]
}
}
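The same creation request can be issued from any HTTP client; here is a minimal sketch using Java 11's built-in HttpClient (the age value is illustrative):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Create a single vertex through the REST API shown above.
public class CreateVertex {
    public static void main(String[] args) throws Exception {
        String body = "{\"label\": \"person\", \"properties\": "
                + "{\"name\": \"marko\", \"age\": 29}}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/graphs/hugegraph/graph/vertices"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The response echoes the created vertex, including its id,
        // e.g. "1:marko" under the PRIMARY_KEY id strategy.
        System.out.println(response.statusCode() + " " + response.body());
    }
}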
POST http://localhost:8080/graphs/hugegraph/graph/vertices/batch
[
{
"label": "person",
"properties": {
"1:marko",
"2:ripple"
]
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/"1:marko"?action=append
{
"label": "person",
"properties": {
"age": 30,
"city": "Beijing"
}
}
Note: There are three categories for property values: single, set, and list. If it is single, it means adding or updating the property value. If it is set or list, it means appending the property value.
200
{
"id": "1:marko",
"label": "person",
]
}
}
Batch-update vertex properties, with support for multiple update strategies, including:
Assuming the original vertex and properties are:
{
"vertices":[
{
"id":"2:lop",
}
]
}
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/batch
{
"vertices":[
{
"label":"software",
}
]
}
Result Analysis:
The usage of other update strategies can be inferred in a similar manner and will not be further elaborated.
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/vertices/"1:marko"?action=eliminate
{
"label": "person",
"properties": {
"city": "Beijing"
}
}
Note: Here, the properties (keys and all values) will be directly deleted, regardless of whether the property values are single, set, or list.
200
{
"id": "1:marko",
"label": "person",
]
}
}
All of the above parameters are optional. If the page parameter is provided, the limit parameter must also be provided, and no other parameters are allowed. label, properties, and limit can be combined in any way.
Property key-value pairs consist of the property name and value in JSON format. Multiple property key-value pairs are allowed as query conditions. The property value supports exact matching, range matching, and fuzzy matching. For exact matching, use the format properties={"age":29}
; for range matching, use the format properties={"age":"P.gt(29)"}
; and for fuzzy matching, use the format properties={"city": "P.textcontains("ChengDu China")}
. The following expressions are supported for range matching:
Expression | Explanation |
---|---|
P.eq(number) | Vertices with property value equal to number |
P.neq(number) | Vertices with property value not equal to number |
P.lt(number) | Vertices with property value less than number |
P.lte(number) | Vertices with property value less than or equal to number |
P.gt(number) | Vertices with property value greater than number |
P.gte(number) | Vertices with property value greater than or equal to number |
P.between(number1,number2) | Vertices with property value greater than or equal to number1 and less than number2 |
P.inside(number1,number2) | Vertices with property value greater than number1 and less than number2 |
P.outside(number1,number2) | Vertices with property value less than number1 and greater than number2 |
P.within(value1,value2,value3,…) | Vertices with property value equal to any of the given values |
Query all vertices with label person and age 29
GET http://localhost:8080/graphs/hugegraph/graph/vertices?label=person&properties={"age":29}&limit=1
200
{
"vertices": [
{
}
]
}
Paginate through all vertices and retrieve the first page (page without a parameter value), limited to 3 records
GET http://localhost:8080/graphs/hugegraph/graph/vertices?page&limit=3
200
{
"vertices": [{
"id": "2:ripple",
],
"page": "001000100853313a706574657200f07ffffffc00e797c6349be736fffc8699e8a502efe10004"
}
The returned body contains the page number information of the next page, "page": "001000100853313a706574657200f07ffffffc00e797c6349be736fffc8699e8a502efe10004"
. When querying the next page, assign this value to the page parameter.
Paginate through all vertices and retrieve the next page (passing the page value returned from the previous page), limited to 3 records
GET http://localhost:8080/graphs/hugegraph/graph/vertices?page=001000100853313a706574657200f07ffffffc00e797c6349be736fffc8699e8a502efe10004&limit=3
200
{
"vertices": [{
"id": "1:josh",
],
"page": null
}
At this point, "page": null
indicates that there are no more pages available. (Note: when the backend is Cassandra, for performance reasons, if the returned page happens to be the last page, the returned page value may be non-empty; requesting the next page with that page value will then return empty data and page = null. Other cases are similar.)
GET http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"
200
{
"id": "1:marko",
"label": "person",
]
}
}
Delete the vertex by ID only
DELETE http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"
204
Delete the vertex by Label + ID
When deleting a vertex by specifying both the Label parameter and the ID, performance is generally better than deleting by ID alone.
DELETE http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"?label=person
204
The modification of the vertex ID format also affects the edge ID, as well as the formats of the source and target vertex IDs.
The EdgeId is formed by concatenating src-vertex-id + direction + label + sort-values + tgt-vertex-id
, but the vertex ID types are not distinguished by quotation marks here. Instead, they are distinguished by prefixes:
- When the vertex ID type is number, the EdgeId uses the prefix L
, like “L123456>1>>L987654”.
- When the vertex ID type is string, the EdgeId uses the prefix S
, like “S1:peter>1>>S2:lop”.
The following examples assume that the various schemas and vertex information mentioned above have been created.
Params Explanation
POST http://localhost:8080/graphs/hugegraph/graph/edges
{
"label": "created",
"outV": "1:peter",
"inV": "2:lop",
"weight": 0.2
}
}
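As a concrete reading of the EdgeId format described above, the id S1:peter>1>>S2:lop of the edge just created splits on > into four parts:
- S1:peter is the source vertex id "1:peter" (a string id, hence the S prefix)
- 1 is the id of the created EdgeLabel (the EdgeLabel responses earlier show created with "id": 1)
- the empty segment between the last two > characters holds the sort-values (created has no sort keys)
- S2:lop is the target vertex id "2:lop"
The direction is implied by the source-to-target order of the id.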
POST http://localhost:8080/graphs/hugegraph/graph/edges/batch
[
{
"label": "created",
"outV": "1:peter",
"S1:peter>1>>S2:lop",
"S1:marko>2>>S1:vadas"
]
PUT http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?action=append
{
"properties": {
"weight": 1.0
}
}
Note: There are three categories of property values: single, set, and list. If it is single, it means adding or updating the property value. If it is set or list, it means appending the property value.
200
{
"id": "S1:peter>1>>S2:lop",
"label": "created",
"weight": 1
}
}
Similar to batch updating vertex properties.
Assuming the original edge and its properties are:
{
"edges":[
{
"id":"S1:josh>2>>S2:ripple",
}
]
}
PUT http://127.0.0.1:8080/graphs/hugegraph/graph/edges/batch
{
"edges":[
{
"id":"S1:josh>2>>S2:ripple",
}
]
}
PUT http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?action=eliminate
{
"properties": {
"weight": 1.0
}
}
Note: This will directly delete the properties (removing the key and all values), regardless of whether the property values are single, set, or list.
200
{
"id": "S1:peter>1>>S2:lop",
"label": "created",
"date": "20170324"
}
}
The supported query options are as follows:
Property key-value pairs consist of the property name and value in JSON format. Multiple property key-value pairs are allowed as query conditions. The property value supports exact matching, range matching, and fuzzy matching. For exact matching, use the format properties={"weight": 0.8}
; for range matching, use the format properties={"age": "P.gt(0.8)"}
; and for fuzzy matching, use the format properties={"city": "P.textcontains("ChengDu China")}
. The supported expressions for range matching are as follows:
Expression | Description |
---|---|
P.eq(number) | Edges with property value equal to number |
P.neq(number) | Edges with property value not equal to number |
P.lt(number) | Edges with property value less than number |
P.lte(number) | Edges with property value less than or equal to number |
P.gt(number) | Edges with property value greater than number |
P.gte(number) | Edges with property value greater than or equal to number |
P.between(number1, number2) | Edges with property value greater than or equal to number1 and less than number2 |
P.inside(number1, number2) | Edges with property value greater than number1 and less than number2 |
P.outside(number1, number2) | Edges with property value less than number1 and greater than number2 |
P.within(value1, value2, value3, …) | Edges with property value equal to any of the given values |
Query the edges connected to vertex person:josh (vertex_id="1:josh") with label created
GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?vertex_id="1:josh"&direction=BOTH&label=created&properties={}
200
{
"edges": [
{
}
]
}
Paginated query for all edges, fetching the first page (page without a parameter value), limited to 3 records
GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?page&limit=3
200
{
"edges": [{
"id": "S1:peter>2>>S2:lop",
],
"page": "002500100753313a6a6f73681210010004000000020953323a726970706c65f07ffffffcf07ffffffd8460d63f4b398dd2721ed4fdb7716b420004"
}
The returned body contains the page number information of the next page, "page": "002500100753313a6a6f73681210010004000000020953323a726970706c65f07ffffffcf07ffffffd8460d63f4b398dd2721ed4fdb7716b420004"
. When querying the next page, assign this value to the page parameter.
Paginated query for all edges, fetching the next page (page with the value returned from the previous page), limited to 3 records
GET http://127.0.0.1:8080/graphs/hugegraph/graph/edges?page=002500100753313a6a6f73681210010004000000020953323a726970706c65f07ffffffcf07ffffffd8460d63f4b398dd2721ed4fdb7716b420004&limit=3
200
{
"edges": [{
"id": "S1:marko>1>20130220>S1:josh",
],
"page": null
}
When "page": null
is returned, there are no more pages available. (Note: when the backend is Cassandra, for performance reasons, if the returned page happens to be the last page, the returned page value may be non-empty; requesting the next page with that page value will then return empty data and page = null. Other cases are similar.)
GET http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop
200
{
"id": "S1:peter>1>>S2:lop",
"label": "created",
"weight": 0.2
}
}
Delete the edge by ID only
DELETE http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop
204
Delete the edge by Label + ID
In general, specifying the Label parameter along with the ID to delete an edge offers better performance than deleting by ID alone.
DELETE http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?label=created
204
HugeGraphServer provides a RESTful API interface for the HugeGraph graph database. In addition to the basic CRUD operations for vertices and edges, it also offers several traversal methods, which we refer to as the traverser API
. These traversal methods implement various complex graph algorithms, making it convenient for users to analyze and explore the graph.
The Traverser API supported by HugeGraph includes:
In the following, we provide a detailed explanation of the Traverser API.
The usage examples in this section are based on the graph presented on the TinkerPop official website:
The data import program is as follows:
public class Loader {
public static void main(String[] args) {
HugeClient client = new HugeClient("http://127.0.0.1:8080", "hugegraph");
SchemaManager schema = client.schema();
peter.addEdge("created", lop, "date", "20170324", "weight", 0.2);
}
}
The vertex IDs are:
"2:ripple",
"1:vadas",
"1:peter",
"1:josh",
"1:marko",
"2:lop"
The edge IDs are:
"S1:peter>2>>S2:lop",
"S1:josh>2>>S2:lop",
"S1:josh>2>>S2:ripple",
"S1:marko>1>20130220>S1:josh",
"S1:marko>1>20160110>S1:vadas",
"S1:marko>2>>S2:lop"
The K-out API allows you to find vertices that are exactly “depth” steps away from a given starting vertex, considering the specified direction, edge type (optional), and depth.
GET http://localhost:8080/graphs/{graph}/traversers/kout?source="1:marko"&max_depth=2
200
{
"vertices":[
"2:ripple",
"1:peter"
]
}
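The "exactly depth steps" semantics can be illustrated with a small sketch (illustrative only, not the server's implementation): a breadth-first expansion that keeps one frontier per step and never revisits a vertex. Run on an undirected view of the sample graph above, it reproduces the response:

import java.util.*;

// Sketch of K-out semantics: vertices exactly `depth` steps from the source.
public class KoutSketch {
    static Set<String> kout(Map<String, List<String>> adj, String source, int depth) {
        Set<String> visited = new HashSet<>(Set.of(source));
        Set<String> frontier = new HashSet<>(Set.of(source));
        for (int i = 0; i < depth; i++) {
            Set<String> next = new HashSet<>();
            for (String v : frontier) {
                for (String n : adj.getOrDefault(v, List.of())) {
                    if (visited.add(n)) { // true only for vertices never seen before
                        next.add(n);
                    }
                }
            }
            frontier = next;
        }
        return frontier;
    }

    public static void main(String[] args) {
        // Undirected adjacency of the TinkerPop sample graph loaded above.
        Map<String, List<String>> adj = Map.of(
                "1:marko", List.of("1:vadas", "1:josh", "2:lop"),
                "1:vadas", List.of("1:marko"),
                "1:josh", List.of("1:marko", "2:lop", "2:ripple"),
                "1:peter", List.of("2:lop"),
                "2:lop", List.of("1:marko", "1:josh", "1:peter"),
                "2:ripple", List.of("1:josh"));
        // Prints the set {2:ripple, 1:peter} (iteration order may vary).
        System.out.println(kout(adj, "1:marko", 2));
    }
}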
Find vertices that are exactly N steps away in a relationship. Two examples:
Find vertices that are exactly depth steps away from the starting vertex, based on the starting vertex, the step definition (including direction, edge type, and property filtering), and the depth.
It differs from the basic version of the K-out API in the following aspects:
- Supports counting the number of neighbors only
- Supports edge property filtering
- Supports returning the shortest path to the neighbor
skip_degree: must satisfy the skip_degree >= max_degree
constraint. Default is 0 (not enabled), indicating that no vertices are skipped. (Note: enabling this configuration means that during traversal, an attempt will be made to access skip_degree edges of a vertex, not just max_degree edges. This incurs additional traversal overhead and may have a significant impact on query performance. Please enable it only after understanding the implications.)
POST http://localhost:8080/graphs/{graph}/traversers/kout
{
"source": "1:marko",
"step": {
"direction": "BOTH",
}
]
}
Refer to section 3.2.1.3.
Find all vertices that are reachable within depth steps, including the starting vertex, based on the starting vertex, direction, edge type (optional), and depth.
Equivalent to the union of: the starting vertex, K-out(1), K-out(2), …, K-out(max_depth).
GET http://localhost:8080/graphs/{graph}/traversers/kneighbor?source="1:marko"&max_depth=2
200
{
"vertices":[
"2:ripple",
@@ -3479,8 +3473,8 @@
"2:lop"
]
}
Find all vertices reachable within N steps, for example:
Find all vertices that are reachable within depth steps from the starting vertex, based on the starting vertex, the step definition (including direction, edge type, and property filtering), and the depth.
It differs from the basic version of the K-neighbor API in the following aspects:
- It supports counting the number of neighbors only
- It supports filtering edges based on their properties
- It supports returning the shortest path to reach the neighbors
skip_degree: must satisfy the skip_degree >= max_degree
constraint. Default is 0 (not enabled), which means no vertices are skipped. (Note: when this configuration is enabled, the traversal will attempt to access skip_degree edges for each vertex, not just max_degree edges. This incurs additional traversal overhead and may significantly impact query performance. Please make sure to understand this before enabling.)
POST http://localhost:8080/graphs/{graph}/traversers/kneighbor
{
"source": "1:marko",
"step": {
"direction": "BOTH",
}
]
}
Refer to section 3.2.3.3.
Retrieve the common neighbors of two vertices.
GET http://localhost:8080/graphs/{graph}/traversers/sameneighbors?vertex="1:marko"&other="1:josh"
200
{
"same_neighbors":[
"2:lop"
]
}
Used to find the common neighbors of two vertices, for example:
Compute the Jaccard similarity of two vertices (the intersection of the two vertices' neighbors divided by the union of their neighbors).
GET http://localhost:8080/graphs/{graph}/traversers/jaccardsimilarity?vertex="1:marko"&other="1:josh"
200
{
"jaccard_similarity": 0.2
}
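This value can be verified by hand on the sample graph. With direction BOTH, the neighbors of 1:marko are {1:vadas, 1:josh, 2:lop} and the neighbors of 1:josh are {1:marko, 2:ripple, 2:lop}, so
$J = \frac{|\{2:lop\}|}{|\{1:vadas, 1:josh, 2:lop, 1:marko, 2:ripple\}|} = \frac{1}{5} = 0.2$
which matches the response above.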
Used to evaluate the similarity or closeness between two vertices, for example:
Compute the N vertices with the highest Jaccard similarity to a specified vertex.
The Jaccard similarity is calculated as the intersection of the two vertices' neighbors divided by the union of their neighbors.
POST http://localhost:8080/graphs/{graph}/traversers/jaccardsimilarity
{
"vertex": "1:marko",
"step": {
"direction": "BOTH",
"1:peter": 0.3333333333333333,
"1:josh": 0.2
}
Used to find the vertices in the graph that have the highest similarity to a specified vertex, for example:
Find the shortest path between a starting vertex and a target vertex, based on the direction, edge type (optional), and maximum depth.
GET http://localhost:8080/graphs/{graph}/traversers/shortestpath?source="1:marko"&target="2:ripple"&max_depth=3
200
{
"path":[
"1:marko",
"2:ripple"
]
}
Used to find the shortest path between two vertices, for example:
Find all shortest paths between a starting vertex and a target vertex, based on the direction, edge type (optional), and maximum depth.
GET http://localhost:8080/graphs/{graph}/traversers/allshortestpaths?source="A"&target="Z"&max_depth=10
200
{
"paths":[
{
}
]
}
Used to find all shortest paths between two vertices, for example:
Find a weighted shortest path between a starting vertex and a target vertex, based on the direction, edge type (optional), maximum depth, and edge weight property.
GET http://localhost:8080/graphs/{graph}/traversers/weightedshortestpath?source="1:marko"&target="2:ripple"&weight="weight"&with_vertex=true
200
{
"path": {
"weight": 2.0,
}
]
}
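In the sample graph this path is 1:marko -> 1:josh -> 2:ripple. Assuming the standard TinkerPop edge weights loaded by the program above (1.0 on marko-knows-josh and 1.0 on josh-created-ripple), the path weight is 1.0 + 1.0 = 2.0, matching the response.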
Used to find the weighted shortest path between two vertices, for example:
Starting from a vertex, find the shortest paths from that vertex to the other vertices in the graph (optionally weighted).
GET http://localhost:8080/graphs/{graph}/traversers/singlesourceshortestpath?source="1:marko"&with_vertex=true
200
{
"paths": {
"2:ripple": {
}
]
}
Used to find the weighted shortest paths from one vertex to other vertices, for example:
Find the shortest paths between pairs of vertices in a specified vertex set.
Note: property values in properties can be a list, meaning the value of the key only needs to be one of the values in the list.
POST http://localhost:8080/graphs/{graph}/traversers/multinodeshortestpath
{
"vertices": {
"ids": ["382:marko", "382:josh", "382:vadas", "382:peter", "383:lop", "383:ripple"]
},
}
]
}
Used to find the shortest paths between multiple vertices, for example:
Find all paths based on conditions such as the starting vertex, destination vertex, direction, edge types (optional), and maximum depth.
GET http://localhost:8080/graphs/{graph}/traversers/paths?source="1:marko"&target="1:josh"&max_depth=5
200
{
"paths":[
{
}
]
}
Used to find all paths between two vertices, for example:
Find all paths based on conditions such as the starting vertex, destination vertex, steps (step), and maximum depth.
Note: property values in properties can be a list, meaning the value of the key only needs to be one of the values in the list.
skip_degree: must satisfy the skip_degree >= max_degree
constraint. Default is 0 (not enabled), which means no vertices are skipped. (Note: when this configuration is enabled, the traversal will attempt to visit skip_degree edges of a vertex, not just max_degree edges. This incurs additional traversal overhead and may have a significant impact on query performance. Please make sure to understand it before enabling.)
POST http://localhost:8080/graphs/{graph}/traversers/paths
{
"sources": {
"ids": ["1:marko"]
},
}
]
}
Used to find all paths between two vertices, for example:
Find all paths that meet the specified conditions, based on a batch of starting vertices, edge rules (including direction, edge types, and property filters), and maximum depth.
Note: property values in properties can be a list, meaning the value of the key only needs to be one of the values in the list.
POST http://localhost:8080/graphs/{graph}/traversers/customizedpaths
{
"sources":{
"ids":[
}
]
}
Suitable for finding various complex sets of paths, for example:
Find all paths that meet the specified conditions, based on a batch of starting vertices, edge rules (including direction, edge types, and property filters), and maximum depth.
Note: property values in properties can be a list, meaning the value of the key only needs to be one of the values in the list.
skip_degree: must satisfy the skip_degree >= max_degree
constraint. Default is 0 (not enabled), which means no vertices are skipped. (Note: after enabling this configuration, traversal will attempt to access skip_degree edges of a vertex, not just max_degree edges. This incurs additional traversal overhead and may have a significant impact on query performance. Please ensure you understand it before enabling.)
POST http://localhost:8080/graphs/{graph}/traversers/templatepaths
{
"sources": {
"ids": [],
"label": "person",
}
]
}
Suitable for finding various complex template paths, such as personA -(Friend)-> personB -(Classmate)-> personC, where the “Friend” and “Classmate” edges can have a maximum depth of 3 and 4 layers, respectively.
Find the intersection points based on the specified conditions, including starting vertex, destination vertex, direction, edge types (optional), and maximum depth.
GET http://localhost:8080/graphs/{graph}/traversers/crosspoints?source="2:lop"&target="2:ripple"&max_depth=5&direction=IN
200
{
"crosspoints":[
{
}
]
}
Used to find the intersection points and their paths between two vertices, for example:
Find the intersection of destination vertices that satisfy the specified conditions, based on a batch of starting vertices, multiple edge rules (including direction, edge type, and property filters), and maximum depth.
sources: defines the starting vertices. Required. The options include:
Note: property values in properties can be a list, meaning the value of the key only needs to be one of the values in the list.
path_patterns: the path rules to be followed from the starting vertices; a list of rules. Required. Each rule is a PathPattern.
skip_degree: must satisfy the skip_degree >= max_degree
constraint. Default is 0 (not enabled), which means no vertices are skipped. (Note: when this configuration is enabled, the traversal will attempt to visit skip_degree edges of a vertex, not just max_degree edges. This incurs additional traversal overhead and may significantly impact query performance. Please make sure you understand it before enabling.)
capacity: the maximum number of vertices visited during the traversal. Optional. Default is 10000000.
limit: the maximum number of paths returned. Optional. Default is 10.
with_path: true returns the paths on which the intersection points are located; false does not. Optional. Default is false.
with_vertex: Optional. Default is false.
POST http://localhost:8080/graphs/{graph}/traversers/customizedcrosspoints
{
"sources":{
"ids":[
"2:lop",
}
]
}
Used to query a group of vertices whose paths intersect at destination vertices. For example:
Find reachable cycles based on the specified conditions, including the starting vertex, direction, edge types (optional), and maximum depth.
For example: 1 -> 25 -> 775 -> 14690 -> 25, where the cycle is 25 -> 775 -> 14690 -> 25.
GET http://localhost:8080/graphs/{graph}/traversers/rings?source="1:marko"&max_depth=2
200
{
"rings":[
{
}
]
}
Used to query cycles reachable from the starting vertex, for example:
Find paths that diverge from the starting vertex and reach boundary vertices, based on the specified conditions, including the starting vertex, direction, edge types (optional), and maximum depth.
For example: 1 -> 25 -> 775 -> 14690 -> 2289 -> 18379, where 18379 is a boundary vertex, meaning there are no edges going out from 18379.
GET http://localhost:8080/graphs/{graph}/traversers/rays?source="1:marko"&max_depth=2&direction=OUT
200
{
"rays":[
{
}
]
}
Used to find paths from the starting vertex to boundary vertices along a certain relationship, for example:
Query a batch of “fusiform similar vertices” based on specified conditions. When two vertices have a certain relationship with many common vertices, they are considered “fusiform similar vertices”. For example, if “Reader A” has read 100 books, readers who have read 80 or more of those 100 books can be defined as “fusiform similar vertices” of “Reader A”.
sources: defines the starting vertices. Required. The options include:
Note: property values in properties can be a list, meaning the value of the key only needs to be one of the values in the list.
label: the edge type. Optional. The default represents all edge labels.
direction: the direction in which the starting vertex expands (OUT, IN, BOTH). Optional. Default is BOTH.
min_neighbors: the minimum number of neighbors. If the number of neighbors is below this threshold, the starting vertex is considered to have no “fusiform similar vertices”. For example, to find the “fusiform similar vertices” based on the books “Reader A” has read, a min_neighbors of 100 means that “Reader A” must have read at least 100 books to have any. Required.
alpha: the similarity, i.e. the proportion of common neighbors between the starting vertex and a “fusiform similar vertex”, relative to all neighbors of the starting vertex. Required.
min_similars: the minimum number of “fusiform similar vertices”. The starting vertex and its “fusiform similar vertices” are returned only if it has at least this many. Optional. Default is 1.
top: return the top highest-similarity “fusiform similar vertices” of a starting vertex. Required. 0 means all.
group_property: used together with min_groups. The starting vertex and its “fusiform similar vertices” are returned only if a certain property has at least min_groups different values among the starting vertex and its “fusiform similar vertices”. For example, when recommending “out-of-town” book pals for “Reader A”, set group_property to the readers’ “city” property and min_groups to at least 2. Optional. If not specified, no property-based filtering is applied.
min_groups: used together with group_property; only meaningful when group_property is set.
max_degree: the maximum number of adjacent edges traversed per vertex during the query. Optional. Default is 10000.
capacity: the maximum number of vertices visited during the traversal. Optional. Default is 10000000.
limit: the upper limit on the number of results returned (one starting vertex and its “fusiform similar vertices” count as one result). Optional. Default is 10.
with_intermediary: whether to return the intermediate vertices commonly related to the starting vertex and its “fusiform similar vertices”. Default is false.
with_vertex: Optional. Default is false.
POST http://localhost:8080/graphs/hugegraph/traversers/fusiformsimilarity
{
"sources":{
"ids":[],
"label": "person",
}
]
}
Used to query vertices that have high similarity with a group of vertices. For example:
GET http://localhost:8080/graphs/hugegraph/traversers/vertices?ids="1:marko"&ids="2:lop"
200
{
"vertices":[
{
}
]
}
Obtain vertex shard information by specifying the shard size split_size
(this can be used in conjunction with the Scan operation in section 3.2.21.3 to retrieve vertices).
GET http://localhost:8080/graphs/hugegraph/traversers/vertices/shards?split_size=67108864
200
{
"shards":[
{
......
]
}
Retrieve vertices in batches based on the specified shard information (refer to section 3.2.21.2 Shard for obtaining shard information).
GET http://localhost:8080/graphs/hugegraph/traversers/vertices/scan?start=0&end=4294967295
200
{
"vertices":[
{
}
]
}
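Shard and Scan are meant to be used together: fetch the shard list first, then scan each shard's range. A minimal sketch using Java 11's built-in HttpClient (JSON parsing is omitted; the start/end pair below mirrors the example values rather than values parsed from the shard list):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Fetch vertex shards, then scan a shard's [start, end) range.
public class ShardScan {
    static String get(HttpClient client, String url) throws Exception {
        return client.send(HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String base = "http://localhost:8080/graphs/hugegraph/traversers/vertices";
        // Step 1: obtain the shard list.
        String shards = get(client, base + "/shards?split_size=67108864");
        System.out.println(shards); // parse each shard's start/end from this JSON
        // Step 2: scan one shard's range (values from the example above).
        String vertices = get(client, base + "/scan?start=0&end=4294967295");
        System.out.println(vertices);
    }
}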
GET http://localhost:8080/graphs/hugegraph/traversers/edges?ids="S1:josh>1>>S2:lop"&ids="S1:josh>1>>S2:ripple"
200
{
"edges": [
{
}
]
}
Retrieve edge shard information by specifying the shard size split_size
(this can be used in conjunction with the Scan operation in section 3.2.22.3 to retrieve edges).
GET http://localhost:8080/graphs/hugegraph/traversers/edges/shards?split_size=4294967295
200
{
"shards":[
{
}
]
}
Batch retrieve edges by specifying shard information (refer to section 3.2.22.2 for shard retrieval).
page: when page
is empty, it indicates the first page of the pagination, starting from the position indicated by start
.
GET http://localhost:8080/graphs/hugegraph/traversers/edges/scan?start=0&end=3221225469
200
{
"edges":[
{
}
]
}
Beyond the graph iteration (traverser) methods above, HugeGraph-Server also provides a Rank API
for recommendation purposes.
You can use it to recommend vertices that are closely related to a given vertex.
A typical scenario for the Personal Rank
algorithm is recommendation: according to the out edges of a vertex,
it recommends other vertices that have the same or similar edges.
Here is a use case: based on someone’s reading habits or reading history, we can recommend books they may be interested in, or book pals with similar tastes.
For example:
Tom likes 5 books a,b,c,d,e
. If we want to recommend some book pals and books for Tom, a simple idea is to check who else also liked these books (common-hobby based). Another reader likes 3 books b,d,f
; Jay likes 4 books c,d,e,g
; and Lee also likes 4 books a,d,e,f
.
The case above is simple. We also provide the public test dataset MovieLens as a use case.
Note: modify the
input.path
to your local path.
Suitable for bipartite graphs; returns all vertices, or a list of correlation values, related to all source vertices.
A bipartite graph is a special model in graph theory, as well as a special flow in networks. Its strongest feature is that it splits all vertices in the graph into two sets: vertices within a set are not connected, but vertices in the two sets may connect with each other.
Suppose we have a bipartite graph of users and things.
A random-walk-based PersonalRank algorithm works like this:
- Choose a user u as the start vertex and set its initial weight to 1.0. From Vu, go to a neighbor vertex with probability alpha, and stay with probability (1 - alpha).
- If we decide to go out, we choose an edge type, such as rating
, to find a common judge.
- Then choose the neighbors of the current vertex randomly with uniform distribution, and reset the weights with uniform distribution.
- Compensate the source vertex’s weight with (1 - alpha).
- Repeat step 2.
- After reaching a certain number of steps or a certain precision, the walk converges and we get a recommendation list.
Params
Required:
- source: the id of the source vertex
- label: the edge label to go out from the source vertex; it should connect two different types of vertices
Optional:
- alpha: the probability of going out from a vertex in each iteration, similar to the alpha of PageRank. Value range is (0, 1], default 0.85.
- max_degree: in the query process, the maximum number of adjacent edges iterated for a vertex, default
10000
- max_depth: the number of iterations, range [2, 50], default
5
- with_label: result filter, default
BOTH_LABEL
, with the following options:
- SAME_LABEL: only keep vertices of the same type as the source vertex
- OTHER_LABEL: only keep vertices of a different type than the source vertex (the other part of the bipartite graph)
- BOTH_LABEL: keep vertices of both types
- limit: the maximum number of vertices returned, default
100
- max_diff: the accuracy threshold for convergence, default
0.0001
(will be implemented soon)
- sorted: whether to sort the results by rank; true for descending sort, false for none, default
true
4.2.1.2 Usage
Method & Url
POST http://localhost:8080/graphs/hugegraph/traversers/personalrank
Request Body
{
"source": "1:1",
"label": "rating",
"alpha": 0.6,
}
}
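To make the random-walk description above concrete, here is a small illustrative sketch (not the server's implementation) of the iteration on the book example from earlier; the second reader's name is not given in the text, so "neo" below is a placeholder:

import java.util.*;

// Iterative PersonalRank on a tiny user-book bipartite graph.
public class PersonalRankSketch {
    static Map<String, Double> personalRank(Map<String, List<String>> adj,
                                            String source, double alpha, int depth) {
        Map<String, Double> rank = new HashMap<>();
        rank.put(source, 1.0);
        for (int i = 0; i < depth; i++) {
            Map<String, Double> next = new HashMap<>();
            for (Map.Entry<String, Double> e : rank.entrySet()) {
                List<String> neighbors = adj.get(e.getKey());
                // Push alpha of this vertex's weight uniformly to its neighbors.
                for (String n : neighbors) {
                    next.merge(n, alpha * e.getValue() / neighbors.size(), Double::sum);
                }
            }
            next.merge(source, 1 - alpha, Double::sum); // compensate the source
            rank = next;
        }
        rank.remove(source); // recommend other vertices, not the start vertex
        return rank;
    }

    public static void main(String[] args) {
        Map<String, List<String>> adj = new HashMap<>();
        adj.put("tom", List.of("a", "b", "c", "d", "e"));
        adj.put("neo", List.of("b", "d", "f")); // placeholder name
        adj.put("jay", List.of("c", "d", "e", "g"));
        adj.put("lee", List.of("a", "d", "e", "f"));
        for (String user : List.of("tom", "neo", "jay", "lee")) {
            for (String book : adj.get(user)) {
                adj.computeIfAbsent(book, k -> new ArrayList<>()).add(user);
            }
        }
        // Defaults from the Params section: alpha 0.85, max_depth 5.
        System.out.println(personalRank(adj, "tom", 0.85, 5));
    }
}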
4.2.2.1 Function Introduction
In a general graph structure, find the first N vertices of each layer with the highest correlation to a given starting point, along with their relevance.
In graph terms: starting from a given point, get the probability of reaching each vertex of each layer.
Params
- source: id of the source vertex, required
- alpha: the probability of going out from a vertex in each iteration, similar to the alpha of PageRank, required, value range is (0, 1]
- steps: a path rule for visiting from the source vertex; a list of Step, each Step mapping to a layer in the result, required. The structure of each Step is as follows:
- direction: the direction of the edge (OUT, IN, BOTH), BOTH by default
- labels: a list of edge types; all edge types will be unioned
- max_degree: in the query process, the maximum number of adjacent edges iterated for a vertex, default
10000
(Note: before v0.12, step only supported degree as the parameter name; from v0.12 on, use max_degree, which is compatible with degree)
- top: retains only the top N results with the highest weight in each layer of the results, default 100, max 1000
- capacity: the maximum number of vertices visited during the traversal, optional, default 10000000
4.2.2.2 Usage
Method & Url
POST http://localhost:8080/graphs/hugegraph/traversers/neighborrank
Request Body
{
"source":"O",
"steps":[
{
}
]
}
4.2.2.3 Suitable Scenario
Find the vertices in different layers for a given start point that should be most recommended.
- For example, in the four-layered structure of audiences, friends, movies, and directors, according to the movies that a certain audience’s friends like, recommend movies for that audience, or recommend directors for those movies based on who made them.
5.1.11 - Variable API
5.1 Variables
Variables can be used to store data about the entire graph. The data is accessed and stored in the form of key-value pairs.
5.1.1 Creating or Updating a Key-Value Pair
Method & Url
PUT http://localhost:8080/graphs/hugegraph/variables/name
Request Body
{
"data": "tom"
}
Response Status
200
Response Body
{
"name": "tom"
}
5.1.2 Listing all key-value pairs
Method & Url
GET http://localhost:8080/graphs/hugegraph/variables
Response Status
200
Response Body
{
"name": "tom"
}
5.1.3 Listing a specific key-value pair
Method & Url
GET http://localhost:8080/graphs/hugegraph/variables/name
Response Status
200
Response Body
{
"name": "tom"
}
5.1.4 Deleting a specific key-value pair
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/variables/name
Response Status
204
5.1.12 - Graphs API
6.1 Graphs
6.1.1 List all graphs
Method & Url
GET http://localhost:8080/graphs
Response Status
200
Response Body
{
"graphs": [
"hugegraph",
"hugegraph1"
]
}
6.1.2 Get details of the graph
Method & Url
GET http://localhost:8080/graphs/hugegraph
Response Status
200
Response Body
{
"name": "hugegraph",
"backend": "cassandra"
}
6.1.3 Clear all data of a graph, including schema, vertices, edges, indexes, etc. This operation requires administrator privileges
Params
Since emptying the graph is a dangerous operation, we have added a confirmation parameter to the API to
avoid accidental calls by users:
- confirm_message: defaults to
I'm sure to delete all data
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/clear?confirm_message=I%27m+sure+to+delete+all+data
Response Status
204
6.1.4 Clone graph. This operation requires administrator privileges
Params
- clone_graph_name: name of an existing graph.
To clone from an existing graph, the user can choose to pass a configuration file,
which will replace the configuration inherited from the existing graph.
Method & Url
POST http://localhost:8080/graphs/hugegraph_clone?clone_graph_name=hugegraph
Request Body [Optional]
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
backend=rocksdb
serializer=binary
store=hugegraph_clone
rocksdb.data_path=./hg2
rocksdb.wal_path=./hg2
Response Status
200
Response Body
{
"name": "hugegraph_clone",
"backend": "rocksdb"
}
6.1.5 Create graph. This operation requires administrator privileges
Method & Url
POST http://localhost:8080/graphs/hugegraph2
Request Body
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
backend=rocksdb
serializer=binary
store=hugegraph2
rocksdb.data_path=./hg2
rocksdb.wal_path=./hg2
Response Status
200
Response Body
{
"name": "hugegraph2",
"backend": "rocksdb"
}
6.1.6 Delete graph and it’s data
Params
Since deleting a graph is a dangerous operation, we have added parameters for confirmation to the API to
-avoid false calls by users:
- confirm_message: default by
I'm sure to drop the graph
Method & Url
DELETE http://localhost:8080/graphs/hugegraph_clone?confirm_message=I%27m%20sure%20to%20drop%20the%20graph
-
Response Status
204
-
6.2 Conf
6.2.1 Get configuration for a graph,This operation requires administrator privileges
Method & Url
GET http://localhost:8080/graphs/hugegraph/conf
-
Response Status
200
-
Response Body
# gremlin entrence to create graph
-gremlin.graph=com.baidu.hugegraph.HugeFactory
-
-# cache config
-#schema.cache_capacity=1048576
-#graph.cache_capacity=10485760
-#graph.cache_expire=600
-
-# schema illegal name template
-#schema.illegal_name_regex=\s+|~.*
-
-#vertex.default_label=vertex
-
-backend=cassandra
-serializer=cassandra
-
-store=hugegraph
-...
-
6.3 Mode
Allowed graph mode values are: NONE, RESTORING, MERGING, LOADING
- NONE mode is the regular mode
- Creating schema with a specified id is not allowed
- Creating a vertex with an id is not supported under the AUTOMATIC id strategy
- LOADING mode is used to load data via hugegraph-loader.
- When adding vertices/edges, it does not check whether the required properties are passed in
Restore has two different modes: Restoring and Merging
- Restoring mode is used to restore schema and graph data to a new graph.
- Supports creating schema with a specified id
- Supports creating a vertex with an id under the AUTOMATIC id strategy
- Merging mode is used to merge schema and graph data into an existing graph.
- Creating schema with a specified id is not allowed
- Supports creating a vertex with an id under the AUTOMATIC id strategy
Under normal circumstances, the graph mode is NONE. When you need to restore a graph, temporarily change the graph mode to Restoring or Merging as needed, and when the restore completes, change the graph mode back to NONE.
6.3.1 Get graph mode.
Method & Url
GET http://localhost:8080/graphs/hugegraph/mode
Response Status
200
Response Body
{
"mode": "NONE"
}
Allowed graph mode values are: NONE, RESTORING, MERGING
6.3.2 Modify graph mode. This operation requires administrator privileges
Method & Url
PUT http://localhost:8080/graphs/hugegraph/mode
Request Body
"RESTORING"
Allowed graph mode values are: NONE, RESTORING, MERGING
Response Status
200
Response Body
{
"mode": "RESTORING"
}
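A minimal sketch of the full restore lifecycle described above, assuming a local unauthenticated server:
import requests

MODE_URL = "http://localhost:8080/graphs/hugegraph/mode"

# Normally the graph runs in NONE mode.
assert requests.get(MODE_URL).json()["mode"] == "NONE"

# Temporarily switch to RESTORING before re-importing schema/graph data ...
requests.put(MODE_URL, json="RESTORING").raise_for_status()
# ... perform the restore (e.g. via hugegraph-tools), then switch back.
requests.put(MODE_URL, json="NONE").raise_for_status()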
6.3.3 Get graph’s read mode.
Params
- name: name of a graph
Method & Url
GET http://localhost:8080/graphs/hugegraph/graph_read_mode
Response Status
200
Response Body
{
"graph_read_mode": "ALL"
}
6.3.4 Modify graph’s read mode. This operation requires administrator privileges
Params
- name: name of a graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/graph_read_mode
Request Body
"OLTP_ONLY"
Allowed read mode values are: ALL, OLTP_ONLY, OLAP_ONLY
Response Status
200
Response Body
{
"graph_read_mode": "OLTP_ONLY"
}
6.4 Snapshot
6.4.1 Create a snapshot
Params
- name: name of a graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/snapshot_create
Response Status
200
Response Body
{
"hugegraph": "snapshot_created"
}
6.4.2 Resume a snapshot
Params
- name: name of a graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/snapshot_resume
Response Status
200
Response Body
{
"hugegraph": "snapshot_resumed"
}
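A minimal sketch of the snapshot pair above from Python (local server assumed):
import requests

BASE = "http://localhost:8080/graphs/hugegraph"
print(requests.put(BASE + "/snapshot_create").json())  # {"hugegraph": "snapshot_created"}
print(requests.put(BASE + "/snapshot_resume").json())  # {"hugegraph": "snapshot_resumed"}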
6.5 Compact
6.5.1 Manually compact graph. This operation requires administrator privileges
Params
- name: name of a graph
Method & Url
PUT http://localhost:8080/graphs/hugegraph/compact
Response Status
200
Response Body
{
"nodes": 1,
"cluster_id": "local",
"servers": {
"local": "OK"
}
}
5.1.13 - Task API
7.1 Task
7.1.1 List all async tasks in graph
Params
- status: the status of asyncTasks
- limit: the max number of tasks to return
Method & Url
GET http://localhost:8080/graphs/hugegraph/tasks?status=success
Response Status
200
Response Body
{
"tasks": [{
"task_name": "hugegraph.traversal().V()",
...
"task_input": "{\"gremlin\":\"hugegraph.traversal().V()\",\"bindings\":{},\"language\":\"gremlin-groovy\",\"aliases\":{\"hugegraph\":\"graph\"}}"
}]
}
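A minimal sketch of listing tasks with both filters from Python (local server assumed):
import requests

resp = requests.get("http://localhost:8080/graphs/hugegraph/tasks",
                    params={"status": "success", "limit": 10})
for task in resp.json()["tasks"]:
    print(task["task_name"])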
7.1.2 View the details of an async task
Method & Url
GET http://localhost:8080/graphs/hugegraph/tasks/2
Response Status
200
Response Body
{
"task_name": "hugegraph.traversal().V()",
"task_progress": 0,
...
"task_callable": "com.baidu.hugegraph.api.job.GremlinAPI$GremlinJob",
"task_input": "{\"gremlin\":\"hugegraph.traversal().V()\",\"bindings\":{},\"language\":\"gremlin-groovy\",\"aliases\":{\"hugegraph\":\"graph\"}}"
}
7.1.3 Delete task information of an async task; this won’t delete the task itself
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/tasks/2
Response Status
204
7.1.4 Cancel an async task; the task must still be cancellable
If you have already created an async task via the Gremlin API as follows:
"for (int i = 0; i < 10; i++) {" +
"hugegraph.addVertex(T.label, 'man');" +
"hugegraph.tx().commit();" +
"try {" +
"sleep(1000);" +
"} catch (InterruptedException e) {" +
"break;" +
"}" +
"}"
Method & Url
PUT http://localhost:8080/graphs/hugegraph/tasks/2?action=cancel
Cancel it within 10s; after more than 10s the task may already have finished and can no longer be cancelled.
Response Status
202
Response Body
{
"cancelled": true
}
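A minimal sketch of the cancellation call from Python; a 202 with {"cancelled": true} means the request was accepted within the window:
import requests

resp = requests.put("http://localhost:8080/graphs/hugegraph/tasks/2",
                    params={"action": "cancel"})
print(resp.status_code, resp.json())  # e.g. 202 {'cancelled': True}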
At this point, the number of vertices whose label is man must be less than 10.
5.1.14 - Gremlin API
8.1 Gremlin
8.1.1 Sending a gremlin statement (GET) to HugeGraphServer for synchronous execution
Params
- gremlin: The gremlin statement to be sent to HugeGraphServer for execution
- bindings: Used to bind parameters. Key is a string, and the value is the bound value (can only be a string or number). This functionality is similar to MySQL’s Prepared Statement and is used to speed up statement execution.
- language: The language type of the sent statement. Default is gremlin-groovy.
- aliases: Adds aliases for existing variables in the graph space.
Querying vertices
Method & Url
GET http://127.0.0.1:8080/gremlin?gremlin=hugegraph.traversal().V('1:marko')
Response Status
200
Response Body
{
"requestId": "c6ef47a8-b634-4b07-9d38-6b3b69a3a556",
"status": {
...
"meta": {}
}
}
8.1.2 Sending a gremlin statement (POST) to HugeGraphServer for synchronous execution
Method & Url
POST http://localhost:8080/gremlin
Querying vertices
Request Body
{
"gremlin": "hugegraph.traversal().V('1:marko')",
"bindings": {},
"language": "gremlin-groovy",
...
"meta": {}
}
}
Note:
Here we directly use the graph object (hugegraph), first retrieve its traversal iterator (traversal()), and then retrieve the vertices. Instead of writing graph.traversal().V() or g.V(), you can add aliases via "aliases": {"graph": "hugegraph", "g": "__g_hugegraph"} and then operate through the aliases. Here, hugegraph is a natively existing variable, and __g_hugegraph is an additional variable added by HugeGraphServer; every graph has a corresponding traversal iterator object in this format (__g_${graph}).
The structure of the response body is different from the RESTful API structure of other vertices or edges. Users may need to parse it manually.
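A minimal sketch of such an aliased query from Python, reusing the aliases shown in the note above (local server assumed):
import requests

body = {
    "gremlin": "g.V('1:marko')",
    "bindings": {},
    "language": "gremlin-groovy",
    "aliases": {"graph": "hugegraph", "g": "__g_hugegraph"},
}
resp = requests.post("http://localhost:8080/gremlin", json=body)
# The Gremlin Server envelope differs from the vertex/edge RESTful APIs;
# the payload sits under result -> data and may need manual parsing.
print(resp.json()["result"]["data"])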
Querying edges
Request Body
{
"gremlin": "g.E('S1:marko>2>>S2:lop')",
"bindings": {},
"language": "gremlin-groovy",
...
"meta": {}
}
}
8.1.3 Sending a gremlin statement (POST) to HugeGraphServer for asynchronous execution
Method & Url
POST http://localhost:8080/graphs/hugegraph/jobs/gremlin
Querying vertices
Request Body
{
"gremlin": "g.V('1:marko')",
"bindings": {},
"language": "gremlin-groovy",
"aliases": {}
}
Note:
Asynchronous execution of Gremlin statements does not currently support aliases. You can use graph to represent the graph you want to operate on, or directly use the name of the graph, such as hugegraph. Additionally, g represents the traversal, which is equivalent to graph.traversal() or hugegraph.traversal().
Response Status
201
Response Body
{
"task_id": 1
}
Note:
You can query the execution status of an asynchronous task by using GET http://localhost:8080/graphs/hugegraph/tasks/1
(where “1” is the task_id). For more information, refer to the Asynchronous Task RESTful API.
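A minimal sketch of the submit-then-poll pattern from Python; the task_status field and its terminal values are an assumption, inferred from the status values used in 7.1.1:
import time
import requests

BASE = "http://localhost:8080/graphs/hugegraph"
job = {"gremlin": "g.V('1:marko')", "bindings": {},
       "language": "gremlin-groovy", "aliases": {}}
task_id = requests.post(BASE + "/jobs/gremlin", json=job).json()["task_id"]

# Poll the Task API until the task leaves the running state (assumed field).
while True:
    task = requests.get(BASE + "/tasks/{}".format(task_id)).json()
    if task.get("task_status") in ("success", "failed", "cancelled"):
        break
    time.sleep(1)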
Querying edges
Request Body
{
"gremlin": "g.E('S1:marko>2>>S2:lop')",
"bindings": {},
"language": "gremlin-groovy",
...
Response Body
{
"task_id": 2
}
Note:
You can query the execution status of an asynchronous task by using GET http://localhost:8080/graphs/hugegraph/tasks/2
(where “2” is the task_id). For more information, refer to the Asynchronous Task RESTful API.
5.1.15 - Authentication API
9.1 User Authentication and Access Control
To enable authentication and related configurations, please refer to the Authentication Configuration documentation.
Overview of User Authentication and Access Control:
HugeGraph supports multi-user authentication and fine-grained access control. It adopts a 4-tier design based on “User-User Group-Operation-Resource” to flexibly control user roles and permissions. Resources describe data in the graph database, such as vertices that meet certain conditions. Each resource consists of three elements: type, label, and properties. There are a total of 18 types and combinations of any label and properties to form resources. The internal condition of a resource is an “AND” relationship, while the condition between multiple resources is an “OR” relationship. Users can belong to one or more user groups, and each user group can have permissions for any number of resources. The types of operations include read, write, delete, execute, etc. HugeGraph supports dynamically creating users, user groups, and resources, and supports dynamically assigning or revoking permissions. During the initialization of the database, a super administrator user is created, and subsequently, various role users can be created by the super administrator. If a newly created user is assigned sufficient permissions, they can create or manage more users.
Example:
user(name=boss) -belong-> group(name=all) -access(read)-> target(graph=graph1, resource={label: person, city: Beijing})
Description: User ‘boss’ has read permission for people in the ‘graph1’ graph from Beijing.
Interface Description:
The user authentication and access control interface includes 5 categories: UserAPI, GroupAPI, TargetAPI, BelongAPI, AccessAPI.
9.2 User API
The user interface includes APIs for creating users, deleting users, modifying users, and querying user-related information.
9.2.1 Create User
Params
- user_name: User name
- user_password: User password
- user_phone: User phone number
- user_email: User email
Both user_name and user_password are required.
Request Body
{
"user_name": "boss",
"user_password": "******",
"user_phone": "182****9088",
"user_email": "123@xx.com"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/users
Response Status
201
Response Body
In the response message, the password is encrypted as ciphertext.
{
"user_password": "******",
"user_email": "123@xx.com",
"user_update": "2020-11-17 14:31:07.833",
...
"id": "-63:boss",
"user_create": "2020-11-17 14:31:07.833"
}
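A minimal sketch of creating this user from Python (admin privileges against a local server assumed):
import requests

user = {"user_name": "boss", "user_password": "******",
        "user_phone": "182****9088", "user_email": "123@xx.com"}
resp = requests.post("http://localhost:8080/graphs/hugegraph/auth/users",
                     json=user)
print(resp.json()["id"])  # e.g. "-63:boss"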
9.2.2 Delete User
Params
- id: User ID to be deleted
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/users/-63:test
Response Status
204
Response Body
1
9.2.3 Modify User
Params
- id: User ID to be modified
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/users/-63:test
Request Body
Modify user_name, user_password, and user_phone.
{
"user_name": "test",
"user_password": "******",
"user_phone": "183****9266"
}
Response Status
200
Response Body
The returned result is the entire user object including the modified content.
{
"user_password": "******",
"user_update": "2020-11-12 10:29:30.455",
"user_name": "test",
...
"id": "-63:test",
"user_create": "2020-11-12 10:27:13.601"
}
9.2.4 Query User List
Params
- limit: Upper limit of the number of results returned
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/users
Response Status
200
Response Body
{
"users": [
{
...
}
]
}
9.2.5 Query a User
Params
- id: User ID to be queried
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/users/-63:admin
Response Status
200
Response Body
{
"users": [
{
...
}
]
}
9.2.6 Query Roles of a User
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/users/-63:boss/role
Response Status
200
Response Body
{
"roles": {
"hugegraph": {
...
}
}
}
9.3 Group API
Groups are granted the corresponding resource permissions, and users are assigned to different groups, thereby having different resource permissions.
The group interface includes APIs for creating groups, deleting groups, modifying groups, and querying group-related information.
9.3.1 Create Group
Params
- group_name: Group name
- group_description: Group description
Request Body
{
"group_name": "all",
"group_description": "group can do anything"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/groups
Response Status
201
Response Body
{
"group_creator": "admin",
"group_name": "all",
...
"id": "-69:all",
"group_description": "group can do anything"
}
9.3.2 Delete Group
Params
- id: Group ID to be deleted
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/groups/-69:grant
Response Status
204
Response Body
1
9.3.3 Modify Group
Params
- id: Group ID to be modified
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/groups/-69:grant
Request Body
Modify group_description
{
"group_name": "grant",
"group_description": "grant"
}
Response Status
200
Response Body
The returned result is the entire group object including the modified content.
{
"group_creator": "admin",
"group_name": "grant",
"group_create": "2020-11-12 09:50:58.458",
...
"id": "-69:grant",
"group_description": "grant"
}
9.3.4 Query Group List
Params
- limit: Upper limit of the number of results returned
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/groups
Response Status
200
Response Body
{
"groups": [
{
...
}
]
}
9.3.5 Query a Specific Group
Params
- id: Group ID to be queried
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/groups/-69:all
Response Status
200
Response Body
{
"group_creator": "admin",
"group_name": "all",
...
"id": "-69:all",
"group_description": "group can do anything"
}
9.4 Resource (Target) API
Resources describe data in the graph database, such as vertices that meet certain criteria. Each resource includes three elements: type, label, and properties. There are 18 types in total, and the combination of any label and any properties forms a resource. The internal conditions of a resource are based on the AND relationship, while the conditions between multiple resources are based on the OR relationship.
The resource API includes creating, deleting, modifying, and querying resources.
9.4.1 Create Resource
Params
- target_name: Name of the resource
- target_graph: Graph of the resource
- target_url: URL of the resource
- target_resources: Resource definitions (list)
target_resources can include multiple target_resource, stored in the form of a list.
Each target_resource contains:
- type: Optional value: VERTEX, EDGE, etc. Can be filled with ALL, indicating it can be a vertex or edge.
- label: Optional value: name of a vertex or edge type. Can be filled with *, indicating any type.
- properties: Map type, can contain multiple key-value pairs of properties. Must match all property values. Property values can support conditional ranges (e.g., age: P.gte(18)). If properties are null, it means any property is allowed. If both the property name and value are ‘*’, it also means any property is allowed.
For example, a specific resource: "target_resources": [{"type":"VERTEX","label":"person","properties":{"city":"Beijing","age":"P.gte(20)"}}]
The resource definition means: a vertex of type ‘person’ with the city property set to ‘Beijing’ and the age property greater than or equal to 20.
Request Body
{
"target_name": "all",
"target_graph": "hugegraph",
"target_url": "127.0.0.1:8080",
...
}
]
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/targets
Response Status
201
Response Body
{
"target_creator": "admin",
"target_name": "all",
...
"id": "-77:all",
"target_update": "2020-11-11 15:32:01.192"
}
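A minimal sketch of creating the fine-grained resource from the example above (type 'person' vertices in Beijing aged 20 or more); the target name here is hypothetical:
import requests

target = {
    "target_name": "person-beijing",
    "target_graph": "hugegraph",
    "target_url": "127.0.0.1:8080",
    "target_resources": [
        {"type": "VERTEX", "label": "person",
         "properties": {"city": "Beijing", "age": "P.gte(20)"}}
    ],
}
resp = requests.post("http://localhost:8080/graphs/hugegraph/auth/targets",
                     json=target)
print(resp.json()["id"])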
9.4.2 Delete Resource
Params
- id: Resource Id to be deleted
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/targets/-77:gremlin
Response Status
204
Response Body
1
9.4.3 Modify Resource
Params
- id: Resource Id to be modified
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/targets/-77:gremlin
Request Body
Modify the ’type’ in the resource definition.
{
"target_name": "gremlin",
"target_graph": "hugegraph",
"target_url": "127.0.0.1:8080",
...
]
}
Response Status
200
Response Body
The response contains the entire target object, including the modified content.
{
"target_creator": "admin",
"target_name": "gremlin",
"target_url": "127.0.0.1:8080",
...
"id": "-77:gremlin",
"target_update": "2020-11-12 09:37:12.780"
}
9.4.4 Query Resource List
Params
- limit: Upper limit of the number of returned results.
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/targets
Response Status
200
Response Body
{
"targets": [
{
...
}
]
}
9.4.5 Query a Specific Resource
Params
- id: Id of the resource to query
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/targets/-77:grant
Response Status
200
Response Body
{
"target_creator": "admin",
"target_name": "grant",
...
"id": "-77:grant",
"target_update": "2020-11-11 15:43:24.841"
}
9.5 Association of Roles (Belong) API
The association between users and user groups allows a user to be associated with one or more user groups. User groups have permissions for related resources, and the permissions for different user groups can be understood as different roles. In other words, users are associated with roles.
The API for associating roles includes creating, deleting, modifying, and querying the association of roles for users.
9.5.1 Create an Association of Roles for a User
Params
- user: User ID
- group: User group ID
- belong_description: Description
Request Body
{
"user": "-63:boss",
"group": "-69:all"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/belongs
Response Status
201
Response Body
{
"belong_create": "2020-11-11 16:19:35.422",
"belong_creator": "admin",
...
"user": "-63:boss",
"group": "-69:all"
}
9.5.2 Delete an Association of Roles
Params
- id: ID of the association of roles to delete
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:grant
Response Status
204
Response Body
1
9.5.3 Modify an Association of Roles
Only the description of an association of roles can be modified; the user and group properties cannot be modified. If you need to modify an association of roles, you need to delete the existing association and create a new one.
Params
- id: ID of the association of roles to modify
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:grant
Request Body
Modify the belong_description field
{
"belong_description": "update test"
}
Response Status
200
Response Body
The response includes the modified content as well as the entire association of roles object
{
"belong_description": "update test",
"belong_create": "2020-11-12 10:40:21.720",
"belong_creator": "admin",
...
"user": "-63:boss",
"group": "-69:grant"
}
9.5.4 Query List of Associations of Roles
Params
- limit: Upper limit on the number of results to return
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/belongs
Response Status
200
Response Body
{
"belongs": [
{
...
}
]
}
9.5.5 View a Specific Association of Roles
Params
- id: The id of the association of roles to be queried
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/belongs/S-63:boss>-82>>S-69:all
Response Status
200
Response Body
{
"belong_create": "2020-11-11 16:19:35.422",
"belong_creator": "admin",
...
"user": "-63:boss",
"group": "-69:all"
}
9.6 Authorization (Access) API
Grant permissions to user groups for resources, including operations such as READ, WRITE, DELETE, EXECUTE, etc.
The authorization API includes creating, deleting, modifying, and querying authorizations.
9.6.1 Create Authorization (Granting permissions to user groups for resources)
Params
- group: Group ID
- target: Resource ID
- access_permission: Permission grant
- access_description: Authorization description
Access permissions:
- READ: Read operations, including all queries such as querying the schema, retrieving vertices/edges, aggregating vertex and edge counts (VERTEX_AGGR/EDGE_AGGR), and reading the graph’s status (STATUS), variables (VAR), tasks (TASK), etc.
- WRITE: Write operations, including creating and updating operations, such as adding property keys to the schema or adding/updating properties of vertices.
- DELETE: Delete operations, including deleting metadata, vertices, or edges.
- EXECUTE: Execute operations, including executing Gremlin queries, executing tasks, and executing metadata functions.
Request Body
{
"group": "-69:all",
"target": "-77:all",
"access_permission": "READ"
}
Method & Url
POST http://localhost:8080/graphs/hugegraph/auth/accesses
Response Status
201
Response Body
{
"access_permission": "READ",
"access_create": "2020-11-11 15:54:54.008",
...
"group": "-69:all",
"target": "-77:all"
}
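Putting the five APIs together, here is a minimal sketch of the whole grant chain (user, group, target, belong, access) from Python; the ids are taken from each response, as in the examples above:
import requests

AUTH = "http://localhost:8080/graphs/hugegraph/auth"

user = requests.post(AUTH + "/users",
                     json={"user_name": "boss",
                           "user_password": "******"}).json()
group = requests.post(AUTH + "/groups",
                      json={"group_name": "all",
                            "group_description": "group can do anything"}).json()
target = requests.post(AUTH + "/targets",
                       json={"target_name": "all", "target_graph": "hugegraph",
                             "target_url": "127.0.0.1:8080",
                             "target_resources": [{"type": "ALL"}]}).json()
# Link the user to the group, then grant the group READ on the target.
requests.post(AUTH + "/belongs", json={"user": user["id"], "group": group["id"]})
requests.post(AUTH + "/accesses", json={"group": group["id"],
                                        "target": target["id"],
                                        "access_permission": "READ"})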
9.6.2 Delete Authorization
Params
- id: The ID of the authorization to be deleted
Method & Url
DELETE http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>12>S-77:all
Response Status
204
Response Body
1
9.6.3 Modify Authorization
Authorization can only be modified for its description. User group, resource, and permission cannot be modified. If you need to modify the relationship of the authorization, you can delete the original authorization relationship and create a new one.
Params
- id: The ID of the authorization to be modified
Method & Url
PUT http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>12>S-77:all
Request Body
Modify access_description
{
"access_description": "test"
}
Response Status
200
Response Body
The returned result is the entire access object, including the modified content.
{
"access_description": "test",
"access_permission": "WRITE",
"access_create": "2020-11-12 10:12:03.074",
...
"group": "-69:all",
"target": "-77:all"
}
9.6.4 Query Authorization List
Params
- limit: The maximum number of results to return
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/accesses
Response Status
200
Response Body
{
"accesses": [
{
...
}
]
}
9.6.5 Query a Specific Authorization
Params
- id: The ID of the authorization to be queried
Method & Url
GET http://localhost:8080/graphs/hugegraph/auth/accesses/S-69:all>-88>11>S-77:all
Response Status
200
Response Body
{
"access_permission": "READ",
"access_create": "2020-11-11 15:54:54.008",
...
"group": "-69:all",
"target": "-77:all"
}
5.1.16 - Other API
10.1 Other
10.1.1 View Version Information of HugeGraph
Method & Url
GET http://localhost:8080/versions
Response Status
200
Response Body
{
"versions": {
"version": "v1",
...
gremlin> :> @script
==>6
For more information on the use of gremlin-console, please refer to Tinkerpop Official Website
6 - GUIDES
6.1 - HugeGraph Architecture Overview
1 Overview
As a general-purpose graph database product, HugeGraph needs to have the basic functions of graph data, as shown in the figure below. HugeGraph includes three levels of functions, namely storage layer, computing layer and user interface layer. HugeGraph supports two types of graph computing, OLTP and OLAP. OLTP implements the Apache TinkerPop3 framework and supports the Gremlin query language. OLAP computing is implemented based on SparkGraphX.
2 Components
The main functionality of HugeGraph is divided into components such as HugeCore, ApiServer, HugeGraph-Client, HugeGraph-Loader and HugeGraph-Studio. The communication relationships between the components are shown in the figure below.
- HugeCore: The core module of HugeGraph; the TinkerPop interfaces are mainly implemented in this module. HugeCore's functionality covers both OLTP and OLAP.
- ApiServer: Provides the RESTful API, exposing services such as the Graph API, Schema API, and Gremlin API.
- HugeGraph-Client: The Java-based client driver; support for other languages such as Python, Go, and C++ can be provided later as needed.
- HugeGraph-Loader: The data import module. HugeGraph-Loader can scan and analyze existing data, automatically generate the Graph Schema creation statements, and quickly import data in batches.
- HugeGraph-Studio: A web-based visual IDE that records Gremlin queries in notebook style and visualizes graph relationships. HugeGraph-Studio is the recommended tool for this system.
- HugeGraph-Computer: HugeGraph-Computer is a distributed graph processing system (OLAP).
6.2 - HugeGraph Design Concepts
1. Property Graph
There are two common graph data representation models: the RDF (Resource Description Framework) model and the Property Graph model.
Both RDF and Property Graph are the most fundamental and well-known graph representation models, and both can represent the entity-relationship modeling of various graphs.
RDF is a W3C standard, while Property Graph is an industry standard widely supported by graph database vendors. HugeGraph currently uses Property Graph.
The storage concept model corresponding to HugeGraph is also designed with reference to Property Graph. For specific examples, see the figure below:
(This figure shows the outdated old design; please ignore it, it will be updated later.)
Inside HugeGraph, each vertex/edge is identified by a unique VertexId/EdgeId, and the properties are stored inside the corresponding vertex/edge. The relationships/mappings between vertices are stored through edges.
When vertex property values are stored via edge pointers, updating a specific vertex property is a simple overwrite; the drawback is that the VertexId is stored redundantly. Updating a relationship's properties requires read-and-modify: read all properties first, modify some, and then write them back to the storage system, so such updates are less efficient. In practice, vertex properties are modified far more often than edge properties; for example, computations such as PageRank and Graph Cluster require frequent modification of vertex property values.
2. Graph Partition Scheme
For distributed graph databases, there are two partition storage methods: Edge Cut and Vertex Cut, as shown in the figure below. With Edge Cut, any vertex appears on only one machine, while its edges may be distributed across different machines; this method may store some edges more than once. With Vertex Cut, any edge appears on only one machine, while the same vertex may be distributed across different machines; this method may store some vertices more than once.
The Edge Cut partition scheme supports high-performance insert and update operations, while the Vertex Cut scheme is better suited to static graph query analysis; therefore Edge Cut suits OLTP graph queries and Vertex Cut suits OLAP graph queries. HugeGraph currently adopts the Edge Cut partition scheme.
3. VertexId Strategy
Vertex of HugeGraph supports three ID strategies, and different VertexLabels in the same graph database can use different Id strategies. The Id strategies currently supported by HugeGraph are:
- Automatic generation (AUTOMATIC): Use the Snowflake algorithm to automatically generate a globally unique Id, Long type;
- Primary Key (PRIMARY_KEY): Generate Id through VertexLabel+PrimaryKeyValues, String type;
- Custom (CUSTOMIZE_STRING|CUSTOMIZE_NUMBER): User-defined Id, which is divided into two types: String and Long, and you need to ensure the uniqueness of the Id yourself;
The default Id strategy is AUTOMATIC; if the user calls the primaryKeys() method and sets correct PrimaryKeys, the PRIMARY_KEY strategy is enabled automatically. After enabling the PRIMARY_KEY strategy, HugeGraph can deduplicate data based on the PrimaryKeys.
- AUTOMATIC ID Policy
schema.vertexLabel("person")
.useAutomaticId()
.properties("name", "age", "city")
.create();
graph.addVertex(T.label, "person","name", "marko", "age", 18, "city", "Beijing");
- PRIMARY_KEY ID Policy
schema.vertexLabel("person")
.usePrimaryKeyId()
.properties("name", "age", "city")
.primaryKeys("name", "age")
.create();
graph.addVertex(T.label, "person","name", "marko", "age", 18, "city", "Beijing");
- CUSTOMIZE_STRING ID Policy
schema.vertexLabel("person")
.useCustomizeStringId()
.properties("name", "age", "city")
.create();
graph.addVertex(T.label, "person", T.id, "123456", "name", "marko","age", 18, "city", "Beijing");
- CUSTOMIZE_NUMBER ID Policy
schema.vertexLabel("person")
.useCustomizeNumberId()
.properties("name", "age", "city")
.create();
graph.addVertex(T.label, "person", T.id, 123456, "name", "marko","age", 18, "city", "Beijing");
If users need Vertex deduplication, there are three options:
- Adopt the PRIMARY_KEY strategy with automatic overwriting, suitable for batch insertion of large data volumes; users cannot know whether overwriting has occurred
- Adopt the AUTOMATIC strategy with read-and-modify, suitable for insertion of small data volumes; users can know exactly whether overwriting occurs
- Adopt the CUSTOMIZE_STRING or CUSTOMIZE_NUMBER strategy, where the user guarantees uniqueness
4. EdgeId Strategy
The EdgeId of HugeGraph is composed of four parts: srcVertexId + edgeLabel + sortKey + tgtVertexId. Among them, sortKey is an important concept in HugeGraph.
There are two reasons for including sortKey in the Edge's unique ID:
- If there are multiple edges with the same label between two vertices, they can be distinguished by sortKey
- For SuperNode vertices, edges can be sorted and truncated by sortKey
Since EdgeId is composed of srcVertexId + edgeLabel + sortKey + tgtVertexId, HugeGraph automatically overwrites when the same Edge is inserted multiple times, achieving deduplication. Note that in batch insert mode the properties of the Edge will also be overwritten.
In addition, because HugeGraph’s EdgeId adopts an automatic deduplication strategy, HugeGraph considers a self-loop (a vertex with an edge pointing to itself) to be a single edge, whereas graph databases adopting the AUTOMATIC strategy (such as TitanDB) would consider the graph to have two edges.
HugeGraph only supports directed edges; an undirected edge can be realized by creating two edges, Out and In.
5. HugeGraph transaction overview
TinkerPop transaction overview
A TinkerPop transaction refers to a unit of work that performs operations on the database; a set of operations within a transaction either all succeed or all fail. For a detailed introduction, please refer to the official TinkerPop documentation: http://tinkerpop.apache.org/docs/current/reference/#transactions
TinkerPop transaction interfaces
- open: open a transaction
- commit: commit the transaction
- rollback: roll back the transaction
- close: close the transaction
TinkerPop transaction specification
- A transaction must be explicitly committed before it takes effect (uncommitted modifications are visible only to queries within the same transaction)
- A transaction must be opened before it can be committed or rolled back
- If the transaction is set to open automatically (the default), there is no need to open it explicitly; if set to manual, it must be opened explicitly
- On close, a transaction can be set to one of three modes: auto-commit, auto-rollback (the default), or manual (explicit close is prohibited)
- A transaction must be in the closed state after commit or rollback
- A transaction must be in the open state after a query
- Transactions (non-threaded tx) must be thread-isolated; multiple threads operating on the same transaction do not affect each other
For more transaction specification use cases, see: Transaction Test
HugeGraph transaction implementation
- All operations in a transaction either succeed together or fail together
- A transaction can only read content already committed by other transactions (read committed)
- All uncommitted operations are visible to queries within the same transaction, including:
- An added vertex can be queried
- A deleted vertex is filtered out
- A deleted vertex's related edges are filtered out
- An added edge can be queried
- A deleted edge is filtered out
- Added/modified (vertex or edge) properties take effect in queries
- Deleted (vertex or edge) properties take effect in queries
- All uncommitted operations become invalid after the transaction is rolled back, including:
- Addition and deletion of vertices and edges
- Addition/modification and deletion of properties
Example: One transaction cannot read another transaction’s uncommitted content
static void testUncommittedTx(final HugeGraph graph) throws InterruptedException {
final CountDownLatch latchUncommit = new CountDownLatch(1);
final CountDownLatch latchRollback = new CountDownLatch(1);
// ...
assert !graph.vertices().hasNext();
assert !graph.edges().hasNext();
}
Transaction implementation principles
- The server implements isolation internally by binding transactions to threads (ThreadLocal)
- Uncommitted content of a transaction overlays the old data in chronological order, so the transaction can query the latest version of the data
- The bottom layer relies on the backend database to guarantee atomicity of transactional operations (for example, the batch interfaces of Cassandra/RocksDB guarantee atomicity)
Note
The RESTful API does not expose a transaction interface for the time being
The TinkerPop API allows opening transactions, which are automatically closed when the request completes (Gremlin Server forces the close)
6.3 - HugeGraph Plugin mechanism and plug-in extension process
Background
- HugeGraph is not only open source but also aims to be simple and easy to use; general users can add plug-in extensions without changing the source code.
- HugeGraph supports a variety of built-in storage backends, and also allows users to extend custom backends without changing the existing source code.
- HugeGraph supports full-text search, which involves tokenization for various languages. Currently 8 Chinese tokenizers are built in, and users can extend custom tokenizers without changing the existing source code.
Extensible dimensions
Currently, plug-ins provide extension points in the following dimensions:
- Backend storage
- Serializer
- Custom configuration items
- Tokenizer
Plug-in implementation mechanism
- HugeGraph provides the plug-in interface HugeGraphPlugin, supporting plug-ins via the Java SPI mechanism
- HugeGraph provides four extension registration functions: registerOptions(), registerBackend(), registerSerializer(), registerAnalyzer()
- The plug-in implementer implements the corresponding Options, Backend, Serializer or Analyzer interface
- The plug-in implementer implements the register() method of the HugeGraphPlugin interface, registers the concrete implementation classes from the previous point in this method, and packages it into a jar
- The plug-in user puts the jar in the plugins directory under the HugeGraph Server installation directory, modifies the relevant configuration items to the plug-in's custom values, and restarts for it to take effect
Plug-in implementation process example
1 Create a new Maven project
1.1 Name the project: hugegraph-plugin-demo
1.2 Add the hugegraph-core jar dependency
The maven pom.xml details are as follows:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
...
</dependency>
</dependencies>
</project>
2 Implement the extension functions
2.1 Extend a custom backend
2.1.1 Implement the interface BackendStoreProvider
- Implement the interface: com.baidu.hugegraph.backend.store.BackendStoreProvider
- Or inherit the abstract class: com.baidu.hugegraph.backend.store.AbstractBackendStoreProvider
Take the RocksDB backend RocksDBStoreProvider as an example:
public class RocksDBStoreProvider extends AbstractBackendStoreProvider {
protected String database() {
return this.graph().toLowerCase();
// ...
return "1.0";
}
}
2.1.2 Implement interface BackendStore
The BackendStore interface is defined as follows:
public interface BackendStore {
// Store name
public String store();
// ...
// Generate an id for a specific type
public Id nextId(HugeType type);
}
2.1.3 Extend a custom serializer
The serializer must inherit the abstract class com.baidu.hugegraph.backend.serializer.AbstractSerializer (implements GraphSerializer, SchemaSerializer).
The main interfaces are defined as follows:
public interface GraphSerializer {
public BackendEntry writeVertex(HugeVertex vertex);
public BackendEntry writeVertexProperty(HugeVertexProperty<?> prop);
public HugeVertex readVertex(HugeGraph graph, BackendEntry entry);
// ...
public BackendEntry writeIndexLabel(IndexLabel indexLabel);
public IndexLabel readIndexLabel(HugeGraph graph, BackendEntry entry);
}
2.1.4 Extend custom configuration items
When adding a custom backend, it may be necessary to add new configuration items. The implementation process mainly includes:
- Add a configuration item container class that implements the interface com.baidu.hugegraph.config.OptionHolder
- Provide a singleton method public static OptionHolder instance(), and call OptionHolder.registerOptions() when the object is initialized
- Add configuration item declarations: the single-value configuration item type is ConfigOption, and the multi-value type is ConfigListOption
Take the RocksDB configuration item definition as an example:
public class RocksDBOptions extends OptionHolder {
private RocksDBOptions() {
super();
// ...
ImmutableList.of()
);
}
2.2 Extend a custom tokenizer
The tokenizer needs to implement the interface com.baidu.hugegraph.analyzer.Analyzer. Take implementing a SpaceAnalyzer space tokenizer as an example.
package com.baidu.hugegraph.plugin;
import java.util.Arrays;
import java.util.HashSet;
// ...
return new HashSet<>(Arrays.asList(text.split(" ")));
}
}
3. Implement the plug-in interface and register it
The plug-in registration entry is HugeGraphPlugin.register(); a custom plug-in must implement this interface method and register the extension items defined above inside it. The interface com.baidu.hugegraph.plugin.HugeGraphPlugin is defined as follows:
public interface HugeGraphPlugin {
public String name();
// ...
public String supportsMaxVersion();
}
And HugeGraphPlugin provides 4 static methods for registering extensions:
- registerOptions(String name, String classPath): register configuration items
- registerBackend(String name, String classPath): register a backend (BackendStoreProvider)
- registerSerializer(String name, String classPath): register a serializer
- registerAnalyzer(String name, String classPath): register a tokenizer
The following is an example of registering the SpaceAnalyzer tokenizer:
package com.baidu.hugegraph.plugin;
public class DemoPlugin implements HugeGraphPlugin {
@@ -6478,34 +6471,35 @@
HugeGraphPlugin.registerAnalyzer("demo", SpaceAnalyzer.class.getName());
}
}
-
4. 配置SPI入口
- 确保services目录存在:hugegraph-plugin-demo/resources/META-INF/services
- 在services目录下建立文本文件:com.baidu.hugegraph.plugin.HugeGraphPlugin
- 文件内容如下:com.baidu.hugegraph.plugin.DemoPlugin
5. 打Jar包
通过maven打包,在项目目录下执行命令mvn package
,在target目录下会生成Jar包文件。
-使用时将该Jar包拷到plugins
目录,重启服务即可生效。
6.4 - Backup Restore
描述
Backup 和 Restore 是备份图和恢复图的功能。备份和恢复的数据包括元数据(schema)和图数据(vertex 和 edge)。
Backup
将 HugeGraph 系统中的一张图的元数据和图数据以 JSON 格式导出。
Restore
将 Backup 导出的JSON格式的数据,重新导入到 HugeGraph 系统中的一个图中。
Restore 有两种模式:
- Restoring 模式,将 Backup 导出的元数据和图数据原封不动的恢复到 HugeGraph 系统中。可用于图的备份和恢复,一般目标图是新图(没有元数据和图数据)。比如:
- 系统升级,先备份图,然后升级系统,最后将图恢复到新的系统中
- 图迁移,从一个 HugeGraph 系统中,使用 Backup 功能将图导出,然后使用 Restore 功能将图导入另一个 HugeGraph 系统中
- Merging 模式,将 Backup 导出的元数据和图数据导入到另一个已经存在元数据或者图数据的图中,过程中元数据的 ID 可能发生改变,顶点和边的 ID 也会发生相应变化。
- 可用于合并图
使用方法
可以使用hugegraph-tools进行图的备份和恢复。
Backup
bin/hugegraph backup -t all -d data
-
该命令将 http://127.0.0.1 的 hugegraph 图的全部元数据和图数据备份到data目录下。
Backup 在三种图模式下都可以正常工作
Restore
Restore 有两种模式: RESTORING 和 MERGING,备份之前首先要根据需要设置图模式。
步骤1:查看并设置图模式
bin/hugegraph graph-mode-get
-
该命令用于查看当前图模式,包括:NONE、RESTORING、MERGING。
bin/hugegraph graph-mode-set -m RESTORING
-
该命令用于设置图模式,Restore 之前可以设置成 RESTORING 或者 MERGING 模式,例子中设置成 RESTORING。
步骤2:Restore 数据
bin/hugegraph restore -t all -d data
-
该命令将data目录下的全部元数据和图数据重新导入到 http://127.0.0.1 的 hugegraph 图中。
步骤3:恢复图模式
bin/hugegraph graph-mode-set -m NONE
-
该命令用于恢复图模式为 NONE。
至此,一次完整的图备份和图恢复流程结束。
帮助
备份和恢复命令的详细使用方式可以参考hugegraph-tools文档。
Backup/Restore使用和实现的API说明
Backup
Backup 使用元数据
和图数据
的相应的 list(GET) API 导出,并未增加新的 API。
Restore
Restore 使用元数据
和图数据
的相应的 create(POST) API 导入,并未增加新的 API。
Restore 时存在两种不同的模式: Restoring 和 Merging,另外,还有常规模式 NONE(默认),区别如下:
- None 模式,元数据和图数据的写入属于正常状态,可参见功能说明。特别的:
- 元数据(schema)创建时不允许指定 ID
- 图数据(vertex)在 id strategy 为 Automatic 时,不允许指定 ID
- Restoring 模式,恢复到一个新图中,特别的:
- 元数据(schema)创建时允许指定 ID
- 图数据(vertex)在 id strategy 为 Automatic 时,允许指定 ID
- Merging 模式,合并到一个已存在元数据和图数据的图中,特别的:
- 元数据(schema)创建时不允许指定 ID
- 图数据(vertex)在 id strategy 为 Automatic 时,允许指定 ID
正常情况下,图模式为 None,当需要 Restore 图时,需要根据需要临时修改图模式为 Restoring 模式或者 Merging 模式,并在完成 Restore 时,恢复图模式为 None。
实现的设置图模式的 RESTful API 如下:
查看某个图的模式. 该操作需要管理员权限
Method & Url
GET http://localhost:8080/graphs/{graph}/mode
-
Response Status
200
+
4. Configure SPI entry
- Make sure the services directory exists: hugegraph-plugin-demo/resources/META-INF/services
- Create a text file in the services directory: com.baidu.hugegraph.plugin.HugeGraphPlugin
- The content of the file is as follows: com.baidu.hugegraph.plugin.DemoPlugin
5. Build the Jar package
Package with Maven by executing mvn package in the project directory; a Jar file will be generated in the target directory. To use the plug-in, copy the Jar into the plugins directory and restart the service for it to take effect.
6.4 - Backup and Restore
Description
Backup and Restore are the functions for backing up and restoring a graph. The data backed up and restored includes the metadata (schema) and the graph data (vertices and edges).
Backup
Export the metadata and graph data of a graph in the HugeGraph system in JSON format.
Restore
Re-import the data in JSON format exported by Backup to a graph in the HugeGraph system.
Restore has two modes:
- In Restoring mode, the metadata and graph data exported by Backup are restored to the HugeGraph system intact. It can be used for graph backup and recovery; typically the target graph is a new graph (without metadata or graph data). For example:
- System upgrade: first back up the graph, then upgrade the system, and finally restore the graph into the new system
- Graph migration: export the graph from one HugeGraph system with the Backup function, then import it into another HugeGraph system with the Restore function
- In the Merging mode, the metadata and graph data exported by Backup are imported into another graph that already has metadata or graph data. During the process, the ID of the metadata may change, and the IDs of vertices and edges will also change accordingly.
- Can be used to merge graphs
Instructions
You can use hugegraph-tools to back up and restore graphs.
Backup
bin/hugegraph backup -t all -d data
+
This command backs up all the metadata and graph data of the hugegraph graph of http://127.0.0.1 to the data directory.
Backup works fine in all three graph modes
Restore
Restore has two modes: RESTORING and MERGING. Before restoring, you must first set the graph mode according to your needs.
Step 1: View and set graph mode
bin/hugegraph graph-mode-get
+
This command is used to view the current graph mode, including: NONE, RESTORING, MERGING.
bin/hugegraph graph-mode-set -m RESTORING
+
This command is used to set the graph mode. Before Restore, it can be set to RESTORING or MERGING mode. In the example, it is set to RESTORING.
Step 2: Restore data
bin/hugegraph restore -t all -d data
+
This command re-imports all metadata and graph data in the data directory to the hugegraph graph at http://127.0.0.1.
Step 3: Restore the graph mode
bin/hugegraph graph-mode-set -m NONE
+
This command is used to restore the graph mode to NONE.
This completes a full graph backup and restore workflow.
Help
For detailed usage of backup and restore commands, please refer to the hugegraph-tools documentation.
API description for Backup/Restore usage and implementation
Backup
Backup exports data via the corresponding list (GET) APIs of metadata and graph data; no new APIs are added.
Restore
Restore imports data via the corresponding create (POST) APIs of metadata and graph data; no new APIs are added.
There are two different modes for Restore: Restoring and Merging. In addition, there is the regular mode NONE (the default). The differences are as follows:
- NONE mode: metadata and graph data are written normally; see the feature documentation. In particular:
- Specifying an ID is not allowed when creating metadata (schema)
- Specifying an ID is not allowed for graph data (vertex) when the id strategy is Automatic
- Restoring mode: restore into a new graph. In particular:
- Specifying an ID is allowed when creating metadata (schema)
- Specifying an ID is allowed for graph data (vertex) when the id strategy is Automatic
- Merging mode: merge into a graph that already has metadata and graph data. In particular:
- Specifying an ID is not allowed when creating metadata (schema)
- Specifying an ID is allowed for graph data (vertex) when the id strategy is Automatic
Normally, the graph mode is NONE. When you need to restore a graph, temporarily change the graph mode to Restoring or Merging as needed, and switch the graph mode back to NONE once the Restore is complete.
The implemented RESTful API for setting graph mode is as follows:
View the mode of a graph. This operation requires administrator privileges
Method & Url
GET http://localhost:8080/graphs/{graph}/mode
+
Response Status
200
Response Body
{
"mode": "NONE"
}
-
合法的图模式包括:NONE,RESTORING,MERGING
设置某个图的模式. 该操作需要管理员权限
Method & Url
PUT http://localhost:8080/graphs/{graph}/mode
-
Request Body
"RESTORING"
-
合法的图模式包括:NONE,RESTORING,MERGING
Response Status
200
+
Legal graph modes include: NONE, RESTORING, MERGING
Set the mode of a graph. This operation requires administrator privileges
Method & Url
PUT http://localhost:8080/graphs/{graph}/mode
+
Request Body
"RESTORING"
+
Legal graph modes include: NONE, RESTORING, MERGING
Response Status
200
Response Body
{
"mode": "RESTORING"
}
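For instance, the mode switch can be scripted against this API with any HTTP client; a minimal sketch in Java (assuming Java 11+, a local server without authentication, and a graph named hugegraph) could look like:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GraphModeExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Switch the graph to RESTORING mode before a restore
        HttpRequest put = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/graphs/hugegraph/mode"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString("\"RESTORING\""))
                .build();
        System.out.println(client.send(put, HttpResponse.BodyHandlers.ofString()).body());

        // Read the current mode back
        HttpRequest get = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/graphs/hugegraph/mode"))
                .GET()
                .build();
        System.out.println(client.send(get, HttpResponse.BodyHandlers.ofString()).body());
    }
}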
-
6.5 - FAQ
如何选择后端存储? 选 RocksDB 还是 Cassandra 还是 Hbase 还是 Mysql?
根据你的具体需要来判断, 一般单机或数据量 < 100 亿推荐 RocksDB, 其他推荐使用分布式存储的后端集群
启动服务时提示:xxx (core dumped) xxx
请检查 JDK 版本是否为 Java 11, 至少要求是 Java 8
启动服务成功了,但是操作图时有类似于"无法连接到后端或连接未打开"的提示
第一次启动服务前,需要先使用init-store
初始化后端,后续版本会将提示得更清晰直接。
所有的后端在使用前都需要执行init-store
吗,序列化的选择可以随意填写么?
除了memory
不需要,其他后端均需要,如:cassandra
、hbase
和rocksdb
等,序列化需一一对应不可随意填写。
执行init-store
报错:Exception in thread "main" java.lang.UnsatisfiedLinkError: /tmp/librocksdbjni3226083071221514754.so: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.10' not found (required by /tmp/librocksdbjni3226083071221514754.so)
RocksDB需要 gcc 4.3.0 (GLIBCXX_3.4.10) 及以上版本
执行init-store.sh
时报错:NoHostAvailableException
NoHostAvailableException
是指无法连接到Cassandra
服务,如果确定是要使用cassandra
后端,请先安装并启动这个服务。至于这个提示本身可能不够直白,我们会更新到文档进行说明的。
bin
目录下包含start-hugegraph.sh
、start-restserver.sh
和start-gremlinserver.sh
三个似乎与启动有关的脚本,到底该使用哪个
自0.3.3版本以来,已经把 GremlinServer 和 RestServer 合并为 HugeGraphServer 了,使用start-hugegraph.sh
启动即可,后两个在后续版本会被删掉。
配置了两个图,名字是hugegraph
和hugegraph1
,而启动服务的命令是start-hugegraph.sh
,是只打开了hugegraph
这个图吗
start-hugegraph.sh
会打开所有gremlin-server.yaml
的graphs
下的图,这二者并无名字上的直接关系
服务启动成功后,使用curl
查询所有顶点时返回乱码
服务端返回的批量顶点/边是压缩(gzip)过的,可以使用管道重定向至 gunzip
进行解压(curl http://example | gunzip
),也可以用Firefox
的postman
或者Chrome
浏览器的restlet
插件发请求,会自动解压缩响应数据。
使用顶点Id通过RESTful API
查询顶点时返回空,但是顶点确实是存在的
检查顶点Id的类型,如果是字符串类型,API
的url
中的id部分需要加上双引号,数字类型则不用加。
已经根据需要给顶点Id加上了双引号,但是通过RESTful API
查询顶点时仍然返回空
检查顶点id中是否包含+
、空格
、/
、?
、%
、&
和=
这些URL的保留字符,如果存在则需要进行编码。下表给出了编码值:
特殊字符 | 编码值
---------| ----
-+ | %2B
-空格 | %20
-/ | %2F
-? | %3F
-% | %25
-# | %23
-& | %26
-= | %3D
-
查询某一类别的顶点或边(query by label
)时提示超时
由于属于某一label的数据量可能比较多,请加上limit限制。
通过RESTful API
操作图是可以的,但是发送Gremlin
语句就报错:Request Failed(500)
可能是GremlinServer
的配置有误,检查gremlin-server.yaml
的host
、port
是否与rest-server.properties
的gremlinserver.url
匹配,如不匹配则修改,然后重启服务。
使用Loader
导数据出现Socket Timeout
异常,然后导致Loader
中断
持续地导入数据会使Server
的压力过大,然后导致有些请求超时。可以通过调整Loader
的参数来适当缓解Server
压力(如:重试次数,重试间隔,错误容忍数等),降低该问题出现频率。
如何删除全部的顶点和边,RESTful API中没有这样的接口,调用gremlin
的g.V().drop()
会报错Vertices in transaction have reached capacity xxx
目前确实没有好办法删除全部的数据,用户如果是自己部署的Server
和后端,可以直接清空数据库,重启Server
。可以使用paging API或scan API先获取所有数据,再逐条删除。
清空了数据库,并且执行了init-store
,但是添加schema
时提示"xxx has existed"
HugeGraphServer
内是有缓存的,清空数据库的同时是需要重启Server
的,否则残留的缓存会产生不一致。
插入顶点或边的过程中报错:Id max length is 128, but got xxx {yyy}
或 Big id max length is 32768, but got xxx
为了保证查询性能,目前的后端存储对id列的长度做了限制,顶点id不能超过128字节,边id长度不能超过32768字节,索引id不能超过128字节。
是否支持嵌套属性,如果不支持,是否有什么替代方案
嵌套属性目前暂不支持。替代方案:可以把嵌套属性作为单独的顶点拿出来,然后用边连接起来。
一个EdgeLabel
是否可以连接多对VertexLabel
,比如"投资"关系,可以是"个人"投资"企业",也可以是"企业"投资"企业"
一个EdgeLabel
不支持连接多对VertexLabel
,需要用户将EdgeLabel
拆分得更细一点,如:“个人投资”,“企业投资”。
通过RestAPI
发送请求时提示HTTP 415 Unsupported Media Type
请求头中需要指定Content-Type:application/json
其他问题可以在对应项目的 issue 区搜索,例如 Server-Issues / Loader Issues
7 - QUERY LANGUAGE
7.1 - HugeGraph Gremlin
概述
HugeGraph支持Apache TinkerPop3的图形遍历查询语言Gremlin。 SQL是关系型数据库查询语言,而Gremlin是一种通用的图数据库查询语言,Gremlin可用于创建图的实体(Vertex和Edge)、修改实体内部属性、删除实体,也可执行图的查询操作。
Gremlin可用于创建图的实体(Vertex和Edge)、修改实体内部属性、删除实体,更主要的是可用于执行图的查询及分析操作。
TinkerPop Features
HugeGraph实现了TinkerPop框架,但是并没有实现TinkerPop所有的特性。
下表列出HugeGraph对TinkerPop各种特性的支持情况:
Graph Features
Name Description Support Computer Determines if the {@code Graph} implementation supports {@link GraphComputer} based processing false Transactions Determines if the {@code Graph} implementations supports transactions. true Persistence Determines if the {@code Graph} implementation supports persisting it’s contents natively to disk.This feature does not refer to every graph’s ability to write to disk via the Gremlin IO packages(.e.g. GraphML), unless the graph natively persists to disk via those options somehow. For example,TinkerGraph does not support this feature as it is a pure in-sideEffects graph. true ThreadedTransactions Determines if the {@code Graph} implementation supports threaded transactions which allow a transaction be executed across multiple threads via {@link Transaction#createThreadedTx()}. false ConcurrentAccess Determines if the {@code Graph} implementation supports more than one connection to the same instance at the same time. For example, Neo4j embedded does not support this feature because concurrent access to the same database files by multiple instances is not possible. However, Neo4j HA could support this feature as each new {@code Graph} instance coordinates with the Neo4j cluster allowing multiple instances to operate on the same database. false
Vertex Features
Name Description Support UserSuppliedIds Determines if an {@link Element} can have a user defined identifier. Implementation that do not support this feature will be expected to auto-generate unique identifiers. In other words, if the {@link Graph} allows {@code graph.addVertex(id,x)} to work and thus set the identifier of the newly added {@link Vertex} to the value of {@code x} then this feature should return true. In this case, {@code x} is assumed to be an identifier data type that the {@link Graph} will accept. false NumericIds Determines if an {@link Element} has numeric identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a numeric value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false StringIds Determines if an {@link Element} has string identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a string value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false UuidIds Determines if an {@link Element} has UUID identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a {@link UUID} value then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false CustomIds Determines if an {@link Element} has a specific custom object as their internal representation.In other words, if the value returned from {@link Element#id()} is a type defined by the graph implementations, such as OrientDB’s {@code Rid}, then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false AnyIds Determines if an {@link Element} any Java object is a suitable identifier. TinkerGraph is a good example of a {@link Graph} that can support this feature, as it can use any {@link Object} as a value for the identifier. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. This setting should only return {@code true} if {@link #supportsUserSuppliedIds()} is {@code true}. false AddProperty Determines if an {@link Element} allows properties to be added. This feature is set independently from supporting “data types” and refers to support of calls to {@link Element#property(String, Object)}. true RemoveProperty Determines if an {@link Element} allows properties to be removed. true AddVertices Determines if a {@link Vertex} can be added to the {@code Graph}. true MultiProperties Determines if a {@link Vertex} can support multiple properties with the same key. false DuplicateMultiProperties Determines if a {@link Vertex} can support non-unique values on the same key. For this value to be {@code true}, then {@link #supportsMetaProperties()} must also return true. By default this method, just returns what {@link #supportsMultiProperties()} returns. false MetaProperties Determines if a {@link Vertex} can support properties on vertex properties. It is assumed that a graph will support all the same data types for meta-properties that are supported for regular properties. 
false RemoveVertices Determines if a {@link Vertex} can be removed from the {@code Graph}. true
Edge Features
Name Description Support UserSuppliedIds Determines if an {@link Element} can have a user defined identifier. Implementation that do not support this feature will be expected to auto-generate unique identifiers. In other words, if the {@link Graph} allows {@code graph.addVertex(id,x)} to work and thus set the identifier of the newly added {@link Vertex} to the value of {@code x} then this feature should return true. In this case, {@code x} is assumed to be an identifier data type that the {@link Graph} will accept. false NumericIds Determines if an {@link Element} has numeric identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a numeric value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false StringIds Determines if an {@link Element} has string identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a string value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false UuidIds Determines if an {@link Element} has UUID identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a {@link UUID} value then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false CustomIds Determines if an {@link Element} has a specific custom object as their internal representation.In other words, if the value returned from {@link Element#id()} is a type defined by the graph implementations, such as OrientDB’s {@code Rid}, then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false AnyIds Determines if an {@link Element} any Java object is a suitable identifier. TinkerGraph is a good example of a {@link Graph} that can support this feature, as it can use any {@link Object} as a value for the identifier. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. This setting should only return {@code true} if {@link #supportsUserSuppliedIds()} is {@code true}. false AddProperty Determines if an {@link Element} allows properties to be added. This feature is set independently from supporting “data types” and refers to support of calls to {@link Element#property(String, Object)}. true RemoveProperty Determines if an {@link Element} allows properties to be removed. true AddEdges Determines if an {@link Edge} can be added to a {@code Vertex}. true RemoveEdges Determines if an {@link Edge} can be removed from a {@code Vertex}. true
Data Type Features
Name Description Support BooleanValues true ByteValues true DoubleValues true FloatValues true IntegerValues true LongValues true MapValues Supports setting of a {@code Map} value. The assumption is that the {@code Map} can contain arbitrary serializable values that may or may not be defined as a feature itself false MixedListValues Supports setting of a {@code List} value. The assumption is that the {@code List} can contain arbitrary serializable values that may or may not be defined as a feature itself. As this{@code List} is “mixed” it does not need to contain objects of the same type. false BooleanArrayValues false ByteArrayValues true DoubleArrayValues false FloatArrayValues false IntegerArrayValues false LongArrayValues false SerializableValues false StringArrayValues false StringValues true UniformListValues Supports setting of a {@code List} value. The assumption is that the {@code List} can contain arbitrary serializable values that may or may not be defined as a feature itself. As this{@code List} is “uniform” it must contain objects of the same type. false
Gremlin的步骤
HugeGraph支持Gremlin的所有步骤。有关Gremlin的完整参考信息,请参与Gremlin官网。
步骤 说明 文档 addE 在两个顶点之间添加边 addE step addV 将顶点添加到图形 addV step and 确保所有遍历都返回值 and step as 用于向步骤的输出分配变量的步骤调制器 as step by 与group
和order
配合使用的步骤调制器 by step coalesce 返回第一个返回结果的遍历 coalesce step constant 返回常量值。 与coalesce
配合使用 constant step count 从遍历返回计数 count step dedup 返回已删除重复内容的值 dedup step drop 丢弃值(顶点/边缘) drop step fold 充当用于计算结果聚合值的屏障 fold step group 根据指定的标签将值分组 group step has 用于筛选属性、顶点和边缘。 支持hasLabel
、hasId
、hasNot
和 has
变体 has step inject 将值注入流中 inject step is 用于通过布尔表达式执行筛选器 is step limit 用于限制遍历中的项数 limit step local 本地包装遍历的某个部分,类似于子查询 local step not 用于生成筛选器的求反结果 not step optional 如果生成了某个结果,则返回指定遍历的结果,否则返回调用元素 optional step or 确保至少有一个遍历会返回值 or step order 按指定的排序顺序返回结果 order step path 返回遍历的完整路径 path step project 将属性投影为映射 project step properties 返回指定标签的属性 properties step range 根据指定的值范围进行筛选 range step repeat 将步骤重复指定的次数。 用于循环 repeat step sample 用于对遍历返回的结果采样 sample step select 用于投影遍历返回的结果 select step store 用于遍历返回的非阻塞聚合 store step tree 将顶点中的路径聚合到树中 tree step unfold 将迭代器作为步骤展开 unfold step union 合并多个遍历返回的结果 union step V 包括顶点与边之间的遍历所需的步骤:V
、E
、out
、in
、both
、outE
、inE
、bothE
、outV
、inV
、bothV
和 otherV
order step where 用于筛选遍历返回的结果。 支持 eq
、neq
、lt
、lte
、gt
、gte
和 between
运算符 where step
7.2 - HugeGraph Examples
1 概述
本示例将TitanDB Getting Started 为模板来演示HugeGraph的使用方法。通过对比HugeGraph和TitanDB,了解HugeGraph和TitanDB的差异。
1.1 HugeGraph与TitanDB的异同
HugeGraph和TitanDB都是基于Apache TinkerPop3框架的图数据库,均支持Gremlin图查询语言,在使用方法和接口方面具有很多相似的地方。然而HugeGraph是全新设计开发的,其代码结构清晰,功能较为丰富,接口更为友好等特点。
HugeGraph相对于TitanDB而言,其主要特点如下:
- HugeGraph目前有HugeGraph-API、HugeGraph-Client、HugeGraph-Loader、HugeGraph-Studio、HugeGraph-Spark等完善的工具组件,可以完成系统集成、数据载入、图可视化查询、Spark 连接等功能;
- HugeGraph具有Server和Client的概念,第三方系统可以通过jar引用、client、api等多种方式接入,而TitanDB仅支持jar引用方式接入。
- HugeGraph的Schema需要显式定义,所有的插入和查询均需要通过严格的schema校验,目前暂不支持schema的隐式创建。
- HugeGraph充分利用后端存储系统的特点来实现数据高效存取,而TitanDB以统一的Kv结构无视后端的差异性。
- HugeGraph的更新操作可以实现按需操作(例如:更新某个属性)性能更好。TitanDB的更新是read and update方式。
- HugeGraph的VertexId和EdgeId均支持拼接,可实现自动去重,同时查询性能更好。TitanDB的所有Id均是自动生成,查询需要经索引。
1.2 人物关系图谱
本示例通过Property Graph Model图数据模型来描述希腊神话中各人物角色的关系(也被成为人物关系图谱),具体关系详见下图。
其中,圆形节点代表实体(Vertex),箭头代表关系(Edge),方框的内容为属性。
该关系图谱中有两类顶点,分别是人物(character)和位置(location)如下表:
名称 类型 属性 character vertex name,age,type location vertex name
有六种关系,分别是父子(father)、母子(mother)、兄弟(brother)、战斗(battled)、居住(lives)、拥有宠物(pet) 关于关系图谱的具体信息如下:
名称 类型 source vertex label target vertex label 属性 father edge character character - mother edge character character - brother edge character character - pet edge character character - lives edge character location reason
在HugeGraph中,每个edge label只能作用于一对source vertex label和target vertex label。也就是说,如果一个图内定义了一种关系father连接character和character,那farther就不能再连接其他的vertex labels。
因此本例子将原TitanDB中的monster, god, human, demigod均使用相同的vertex label: character
来表示, 同时增加属性type来标识人物的类型。edge label
与原TitanDB保持一致。当然为了满足edge label
约束,也可以通过调整edge label
的name
来实现。
2 Graph Schema and Data Ingest Examples
HugeGraph需要显示创建Schema,因此需要依次创建PropertyKey、VertexLabel、EdgeLabel,如果有需要索引还需要创建IndexLabel。
2.1 Graph Schema
schema = hugegraph.schema()
+
6.5 - FAQ
How to choose the back-end storage? Choose RocksDB or Cassandra or Hbase or Mysql?
Judge according to your specific needs. In general, RocksDB is recommended for single-machine deployments or data volumes below 10 billion; otherwise, a backend cluster with distributed storage is recommended.
Prompt when starting the service: xxx (core dumped) xxx
Please check whether the JDK version is Java 11; Java 8 is the minimum requirement
The service is started successfully, but there is a prompt similar to “Unable to connect to the backend or the connection is not open” when operating the graph
Before starting the service for the first time, you need to initialize the backend with init-store; subsequent versions will make this prompt clearer and more direct.
Do all backends require init-store to be executed before use, and can the serializer option be filled in at will?
Except for memory, all other backends require it, such as cassandra, hbase and rocksdb. The serializer must correspond one-to-one with the backend and cannot be filled in at will.
Executing init-store reports the error: Exception in thread "main" java.lang.UnsatisfiedLinkError: /tmp/librocksdbjni3226083071221514754.so: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.10' not found (required by /tmp/librocksdbjni3226083071221514754.so)
RocksDB requires gcc 4.3.0 (GLIBCXX_3.4.10) and above
The error NoHostAvailableException
occurred while executing init-store.sh
.
NoHostAvailableException
means that the Cassandra
service cannot be connected to. If you are sure that you want to use the Cassandra backend, please install and start this service first. As for the message itself, it may not be clear enough, and we will update the documentation to provide further explanation.
The bin
directory contains start-hugegraph.sh
, start-restserver.sh
and start-gremlinserver.sh
. These scripts seem to be related to startup. Which one should be used?
Since version 0.3.3, GremlinServer and RestServer have been merged into HugeGraphServer. To start, use start-hugegraph.sh. The latter two will be removed in future versions.
Two graphs are configured, the names are hugegraph
and hugegraph1
, and the command to start the service is start-hugegraph.sh
. Is only the hugegraph graph opened?
start-hugegraph.sh
will open all graphs configured under graphs in gremlin-server.yaml; the script name and the graph names have no direct relationship
After the service starts successfully, garbled characters are returned when using curl
to query all vertices
The batch vertices/edges returned by the server are compressed (gzip), and can be redirected to gunzip
for decompression (curl http://example | gunzip
), or the request can be sent with the postman of Firefox or the restlet plug-in of the Chrome browser, which will decompress the response data automatically.
When using the vertex Id to query the vertex through the RESTful API
, it returns empty, but the vertex does exist
Check the type of the vertex id: if it is a string, the id part of the API URL must be enclosed in double quotes; numeric ids need no quotes.
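As an illustration (the URLs assume a local server and a graph named hugegraph; the ids are made up):
GET http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"   // string id, quoted
GET http://localhost:8080/graphs/hugegraph/graph/vertices/123         // numeric id, unquoted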
Vertex Id has been double quoted as required, but querying the vertex via the RESTful API still returns empty
Check whether the vertex id contains +
, space
, /
, ?
, %
, &
, and =
, which are reserved characters in URLs. If present, they need to be percent-encoded. The following table gives the encoded values:
special character | encoded value
+------------------| -------------
++ | %2B
+space | %20
+/ | %2F
+? | %3F
+% | %25
+# | %23
+& | %26
+= | %3D
+
Timeout when querying vertices or edges of a certain category (query by label
)
Since the amount of data under a given label may be relatively large, please add a limit to the query.
It is possible to operate the graph through the RESTful API
, but when sending Gremlin
statements, an error is reported: Request Failed(500)
It may be that the configuration of GremlinServer
is wrong, check whether the host
and port
of gremlin-server.yaml
match the gremlinserver.url
of rest-server.properties
, if they do not match, modify them and then restart the service.
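For example, the two files should agree on the address, roughly like this (the host and port values are illustrative):
# gremlin-server.yaml
host: 127.0.0.1
port: 8182
# rest-server.properties
gremlinserver.url=http://127.0.0.1:8182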
When using Loader
to import data, a Socket Timeout
exception occurs, and then Loader
is interrupted
Continuously importing data will put too much pressure on the Server
, which will cause some requests to time out. The pressure on Server
can be appropriately relieved by adjusting the parameters of Loader
(such as the number of retries, the retry interval, and the error tolerance) to reduce how often this problem occurs.
How to delete all vertices and edges. There is no such interface in the RESTful API. Calling g.V().drop()
of gremlin
will report an error Vertices in transaction have reached capacity xxx
At present there is no good way to delete all the data. Users who deploy the Server and the backend themselves can clear the database directly and restart the Server. Alternatively, use the paging API or scan API to fetch all the data first, then delete it record by record.
The database has been cleared and init-store
has been executed, but when trying to add a schema, the prompt “xxx has existed” appeared.
There is a cache in the HugeGraphServer
, and it is necessary to restart the Server
when the database is cleared, otherwise the residual cache will be inconsistent.
An error is reported during the process of inserting vertices or edges: Id max length is 128, but got xxx {yyy}
or Big id max length is 32768, but got xxx
In order to ensure query performance, the current backend storage limits the length of the id column. The vertex id cannot exceed 128 bytes, the edge id cannot exceed 32768 bytes, and the index id cannot exceed 128 bytes.
Is there support for nested attributes, and if not, are there any alternatives?
Nested attributes are currently not supported. Alternative: Nested attributes can be taken out as individual vertices and connected with edges.
Can an EdgeLabel
connect multiple pairs of VertexLabel
, such as “investment” relationship, which can be “individual” investing in “enterprise”, or “enterprise” investing in “enterprise”?
An EdgeLabel
does not support connecting multiple pairs of VertexLabels
, so users need to split the EdgeLabel into finer-grained labels, such as "personal investment" and "enterprise investment" (see the sketch below).
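A sketch of such a split with the schema API (assuming person and company vertex labels already exist; all names here are illustrative):
schema.edgeLabel("personal_invest").sourceLabel("person").targetLabel("company").ifNotExist().create()
schema.edgeLabel("company_invest").sourceLabel("company").targetLabel("company").ifNotExist().create()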
Prompt HTTP 415 Unsupported Media Type
when sending a request through RestAPI
Content-Type: application/json
needs to be specified in the request header
Other issues can be searched in the issue area of the corresponding project, such as Server-Issues / Loader Issues
7 - QUERY LANGUAGE
7.1 - HugeGraph Gremlin
Overview
HugeGraph supports Gremlin, a graph traversal query language of Apache TinkerPop3. While SQL is a query language for relational databases, Gremlin is a general-purpose query language for graph databases. Gremlin can be used to create entities (Vertex and Edge) of a graph, modify the properties of entities, delete entities, as well as perform graph queries.
Gremlin can be used to create entities (Vertex and Edge) of a graph, modify the properties of entities, and delete entities. More importantly, it can be used to perform graph querying and analysis operations.
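To make this concrete, a few illustrative one-liners (assuming a traversal source g; the person label, names and values are made up):
g.addV('person').property('name', 'marko')        // create a vertex
g.V().has('name', 'marko').property('age', 29)    // modify a property
g.V().has('name', 'marko').drop()                 // delete an entity
g.V().hasLabel('person').count()                  // query/analyze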
TinkerPop Features
HugeGraph implements the TinkerPop framework, but not all TinkerPop features are implemented.
The table below lists the support status of various TinkerPop features in HugeGraph:
Graph Features
Name Description Support Computer Determines if the {@code Graph} implementation supports {@link GraphComputer} based processing false Transactions Determines if the {@code Graph} implementations supports transactions. true Persistence Determines if the {@code Graph} implementation supports persisting it’s contents natively to disk.This feature does not refer to every graph’s ability to write to disk via the Gremlin IO packages(.e.g. GraphML), unless the graph natively persists to disk via those options somehow. For example,TinkerGraph does not support this feature as it is a pure in-sideEffects graph. true ThreadedTransactions Determines if the {@code Graph} implementation supports threaded transactions which allow a transaction be executed across multiple threads via {@link Transaction#createThreadedTx()}. false ConcurrentAccess Determines if the {@code Graph} implementation supports more than one connection to the same instance at the same time. For example, Neo4j embedded does not support this feature because concurrent access to the same database files by multiple instances is not possible. However, Neo4j HA could support this feature as each new {@code Graph} instance coordinates with the Neo4j cluster allowing multiple instances to operate on the same database. false
Vertex Features
Name Description Support UserSuppliedIds Determines if an {@link Element} can have a user defined identifier. Implementation that do not support this feature will be expected to auto-generate unique identifiers. In other words, if the {@link Graph} allows {@code graph.addVertex(id,x)} to work and thus set the identifier of the newly added {@link Vertex} to the value of {@code x} then this feature should return true. In this case, {@code x} is assumed to be an identifier data type that the {@link Graph} will accept. false NumericIds Determines if an {@link Element} has numeric identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a numeric value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false StringIds Determines if an {@link Element} has string identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a string value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false UuidIds Determines if an {@link Element} has UUID identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a {@link UUID} value then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false CustomIds Determines if an {@link Element} has a specific custom object as their internal representation.In other words, if the value returned from {@link Element#id()} is a type defined by the graph implementations, such as OrientDB’s {@code Rid}, then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false AnyIds Determines if an {@link Element} any Java object is a suitable identifier. TinkerGraph is a good example of a {@link Graph} that can support this feature, as it can use any {@link Object} as a value for the identifier. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. This setting should only return {@code true} if {@link #supportsUserSuppliedIds()} is {@code true}. false AddProperty Determines if an {@link Element} allows properties to be added. This feature is set independently from supporting “data types” and refers to support of calls to {@link Element#property(String, Object)}. true RemoveProperty Determines if an {@link Element} allows properties to be removed. true AddVertices Determines if a {@link Vertex} can be added to the {@code Graph}. true MultiProperties Determines if a {@link Vertex} can support multiple properties with the same key. false DuplicateMultiProperties Determines if a {@link Vertex} can support non-unique values on the same key. For this value to be {@code true}, then {@link #supportsMetaProperties()} must also return true. By default this method, just returns what {@link #supportsMultiProperties()} returns. false MetaProperties Determines if a {@link Vertex} can support properties on vertex properties. It is assumed that a graph will support all the same data types for meta-properties that are supported for regular properties. 
false RemoveVertices Determines if a {@link Vertex} can be removed from the {@code Graph}. true
Edge Features
Name Description Support UserSuppliedIds Determines if an {@link Element} can have a user defined identifier. Implementation that do not support this feature will be expected to auto-generate unique identifiers. In other words, if the {@link Graph} allows {@code graph.addVertex(id,x)} to work and thus set the identifier of the newly added {@link Vertex} to the value of {@code x} then this feature should return true. In this case, {@code x} is assumed to be an identifier data type that the {@link Graph} will accept. false NumericIds Determines if an {@link Element} has numeric identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a numeric value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false StringIds Determines if an {@link Element} has string identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a string value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false UuidIds Determines if an {@link Element} has UUID identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a {@link UUID} value then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false CustomIds Determines if an {@link Element} has a specific custom object as their internal representation.In other words, if the value returned from {@link Element#id()} is a type defined by the graph implementations, such as OrientDB’s {@code Rid}, then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. false AnyIds Determines if an {@link Element} any Java object is a suitable identifier. TinkerGraph is a good example of a {@link Graph} that can support this feature, as it can use any {@link Object} as a value for the identifier. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. This setting should only return {@code true} if {@link #supportsUserSuppliedIds()} is {@code true}. false AddProperty Determines if an {@link Element} allows properties to be added. This feature is set independently from supporting “data types” and refers to support of calls to {@link Element#property(String, Object)}. true RemoveProperty Determines if an {@link Element} allows properties to be removed. true AddEdges Determines if an {@link Edge} can be added to a {@code Vertex}. true RemoveEdges Determines if an {@link Edge} can be removed from a {@code Vertex}. true
Data Type Features
Name Description Support BooleanValues true ByteValues true DoubleValues true FloatValues true IntegerValues true LongValues true MapValues Supports setting of a {@code Map} value. The assumption is that the {@code Map} can contain arbitrary serializable values that may or may not be defined as a feature itself false MixedListValues Supports setting of a {@code List} value. The assumption is that the {@code List} can contain arbitrary serializable values that may or may not be defined as a feature itself. As this{@code List} is “mixed” it does not need to contain objects of the same type. false BooleanArrayValues false ByteArrayValues true DoubleArrayValues false FloatArrayValues false IntegerArrayValues false LongArrayValues false SerializableValues false StringArrayValues false StringValues true UniformListValues Supports setting of a {@code List} value. The assumption is that the {@code List} can contain arbitrary serializable values that may or may not be defined as a feature itself. As this{@code List} is “uniform” it must contain objects of the same type. false
Gremlin Steps
HugeGraph supports all steps of Gremlin. For complete reference information about Gremlin, please refer to the Gremlin official website.
Step Description Documentation addE Add an edge between two vertices. addE step addV add vertices to graph. addV step and Make sure all traversals return values. and step as Step modulator for assigning variables to the step’s output. as step by Step Modulators used in conjunction with group and order. by step coalesce Returns the first traversal that returns a result. coalesce step constant Returns a constant value. Used in conjunction with coalesce. constant step count Returns a count from the traversal. count step dedup Returns values with duplicates removed. dedup step drop Discards a value (vertex/edge). drop step fold Acts as a barrier for computing aggregated values from results. fold step group Groups values based on specified labels. group step has Used to filter properties, vertices, and edges. Supports hasLabel
, hasId
, hasNot
, and has
variants. has step inject Injects values into the stream. inject step is Used to filter by a Boolean expression. is step limit Used to limit the number of items in a traversal. limit step local Locally wraps a part of a traversal, similar to a subquery. local step not Used to generate the negation result of a filter. not step optional Returns the result of a specified traversal if it generates any results, otherwise returns the calling element. optional step or Ensures that at least one traversal returns a value. or step order Returns results in the specified order. order step path Returns the full path of the traversal. path step project Projects properties as a map. project step properties Returns properties with specified labels. properties step range Filters based on a specified range of values. range step repeat Repeats a step a specified number of times. Used for looping. repeat step sample Used to sample results returned by the traversal. sample step select Used to project the results returned by the traversal. select step store Used for non-blocking aggregation of results returned by the traversal. store step tree Aggregates the paths in vertices into a tree. tree step unfold Unfolds an iterator as a step. unfold step union Merges the results returned by multiple traversals. union step V These are the steps required for traversing between vertices and edges: V
, E
, out
, in
, both
, outE
, inE
, bothE
, outV
, inV
, bothV
, and otherV
. order step where Used to filter the results returned by a traversal. Supports eq
, neq
, lt
, lte
, gt
, gte
, and between
operators. where step
7.2 - HugeGraph Examples
1 Overview
This example uses TitanDB Getting Started as a template to demonstrate how to use HugeGraph, and compares HugeGraph with TitanDB to show the differences between the two.
1.1 Similarities and differences between HugeGraph and TitanDB
Both HugeGraph and TitanDB are graph databases based on the Apache TinkerPop3 framework; both support the Gremlin graph query language and are similar in usage and interfaces. However, HugeGraph was designed and developed from scratch, with a clear code structure, rich functionality and friendlier interfaces.
Compared with TitanDB, the main characteristics of HugeGraph are as follows:
- HugeGraph currently has a complete set of tool components such as HugeGraph-API, HugeGraph-Client, HugeGraph-Loader, HugeGraph-Studio and HugeGraph-Spark, covering system integration, data loading, visual graph querying, Spark connectivity and more;
- HugeGraph has the concepts of Server and Client; third-party systems can connect via jar reference, client, api and other means, while TitanDB only supports jar-reference access.
- HugeGraph's schema must be defined explicitly, and all insertions and queries go through strict schema validation; implicit schema creation is currently not supported.
- HugeGraph makes full use of the characteristics of the backend storage system for efficient data access, while TitanDB uses a uniform KV structure that ignores the differences between backends.
- HugeGraph's update operations can be performed on demand (for example, updating a single property) with better performance, while TitanDB updates in a read-and-update manner.
- Both VertexId and EdgeId in HugeGraph support concatenation, enabling automatic deduplication and better query performance, while all Ids in TitanDB are auto-generated and queries must go through an index.
1.2 Character relationship graph
This example uses the Property Graph Model to describe the relationships between the characters of Greek mythology (also called a character relationship graph); see the figure below for the specific relationships.
Circular nodes represent entities (Vertex), arrows represent relationships (Edge), and the boxes contain properties.
There are two kinds of vertices in this relationship graph, character and location, as shown in the following table:
Name | Type | Properties
character | vertex | name, age, type
location | vertex | name
There are six kinds of relationships: father, mother, brother, battled, lives, and pet. The details of the relationship graph are as follows:
Name | Type | Source vertex label | Target vertex label | Properties
father | edge | character | character | -
mother | edge | character | character | -
brother | edge | character | character | -
pet | edge | character | character | -
lives | edge | character | location | reason
In HugeGraph, each edge label can only apply to one pair of source vertex label and target vertex label. That is, if a graph defines a relationship father connecting character and character, then father cannot connect any other vertex labels.
Therefore, in this example, the monster, god, human and demigod of the original TitanDB are all represented by the same vertex label character, and a type property is added to identify the character's type. The edge labels stay consistent with the original TitanDB. Of course, the edge label constraint could also be satisfied by adjusting the names of the edge labels.
2 Graph Schema and Data Ingest Examples
HugeGraph requires the schema to be created explicitly, so the PropertyKeys, VertexLabels and EdgeLabels need to be created in turn, and IndexLabels as well if indexes are needed.
2.1 Graph Schema
schema = hugegraph.schema()
schema.propertyKey("name").asText().ifNotExist().create()
schema.propertyKey("age").asInt().ifNotExist().create()
@@ -6573,9 +6567,9 @@
// what is the name of the brother and the name of the place?
g.V(pluto).out('brother').as('god').out('lives').as('place').select('god','place').by('name')
-
It is recommended to use HugeGraph-Studio to execute the above code visually. It can also be executed via HugeGraph-Client, HugeApi, GremlinConsole, GremlinDriver and other means.
3.2 Summary
HugeGraph currently supports the Gremlin syntax; users can fulfil all kinds of query needs through Gremlin / REST-API.
8 - PERFORMANCE
8.1 - HugeGraph BenchMark Performance
1 Test environment
1.1 Hardware information
CPU | Memory | NIC | Disk
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD
1.2 Software information
1.2.1 Test cases
The tests use graphdb-benchmark, a benchmark suite for graph databases. The suite mainly contains 4 types of tests:
Massive Insertion: batch insertion of vertices and edges, committing a certain number of vertices or edges at a time
Single Insertion: single insertion, committing each vertex or edge immediately
Query: the basic query operations of a graph database:
- Find Neighbors: query the neighbors of all vertices
- Find Adjacent Nodes: query the adjacent vertices of all edges
- Find Shortest Path: query the shortest paths from the first vertex to 100 random vertices
Clustering: a community detection algorithm based on the Louvain Method
1.2.2 Test datasets
The tests use both synthetic and real data:
MIW, SIW and QW use SNAP datasets
CW uses synthetic data generated by the LFR-Benchmark generator
The sizes of the datasets used in this test:
Name | Number of vertices | Number of edges | File size
email-enron.txt | 36,691 | 367,661 | 4MB
com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB
amazon0601.txt | 403,393 | 3,387,388 | 47.9MB
com-lj.ungraph.txt | 3,997,961 | 34,681,189 | 479MB
1.3 Service configuration
HugeGraph version: 0.5.6; RestServer, Gremlin Server and the backends all run on the same server
- RocksDB version: rocksdbjni-5.8.6
Titan version: 0.5.4, using thrift+Cassandra mode
- Cassandra version: cassandra-3.10; commit-log and data share the SSD
Neo4j version: 2.0.1
The Titan version adapted by graphdb-benchmark is 0.5.4
2 Test results
2.1 Batch insertion performance
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w)
HugeGraph | 0.629 | 5.711 | 5.243 | 67.033
Titan | 10.15 | 108.569 | 150.266 | 1217.944
Neo4j | 3.884 | 18.938 | 24.890 | 281.537
Notes
- The number in "()" in the header is the data scale, in edges (1w = 10,000)
- The data in the table is the batch insertion time, in seconds (s)
- For example, HugeGraph takes 5.711s to insert the 3 million edges of the amazon0601 dataset using RocksDB
Conclusion
- Batch insertion performance: HugeGraph(RocksDB) > Neo4j > Titan(thrift+Cassandra)
2.2 Traversal performance
2.2.1 Terminology
- FN (Find Neighbor): traverse all vertices, find the adjacent edges of each vertex, and find the other vertex through the edge and vertex
- FA (Find Adjacent): traverse all edges and obtain the source vertex and target vertex of each edge
2.2.2 FN performance
Backend | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w) | com-lj.ungraph(400w)
HugeGraph | 4.072 | 45.118 | 66.006 | 609.083
Titan | 8.084 | 92.507 | 184.543 | 1099.371
Neo4j | 2.424 | 10.537 | 11.609 | 106.919
Notes
- The number in "()" in the header is the data scale, in vertices
- The data in the table is the time taken to traverse the vertices, in s
- For example, HugeGraph with the RocksDB backend traverses all vertices of amazon0601, looking up the adjacent edges and the other vertex of each, taking 45.118s in total
2.2.3 FA performance
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w)
HugeGraph | 1.540 | 10.764 | 11.243 | 151.271
Titan | 7.361 | 93.344 | 169.218 | 1085.235
Neo4j | 1.673 | 4.775 | 4.284 | 40.507
Notes
- The number in "()" in the header is the data scale, in edges
- The data in the table is the time taken to traverse the edges, in s
- For example, HugeGraph with the RocksDB backend traverses all edges of amazon0601 and queries the two vertices of each edge, taking 10.764s in total
Conclusion
- Traversal performance: Neo4j > HugeGraph(RocksDB) > Titan(thrift+Cassandra)
2.3 Performance of common graph analysis methods in HugeGraph
Terminology
- FS (Find Shortest Path): find the shortest path between vertices
- K-neighbor: all vertices reachable from a starting vertex within K hops, including vertices reachable in 1, 2, 3…(K-1) and K hops
- K-out: vertices reachable from a starting vertex through exactly K outgoing hops (see the Gremlin sketch below)
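Expressed in Gremlin, the two notions can be sketched roughly as follows (illustrative only; K=3 and the start id 'v1' are made up, and the exact semantics of HugeGraph's built-in algorithms may differ):
g.V('v1').repeat(out()).times(3).dedup()           // K-out: vertices after exactly 3 out-hops
g.V('v1').repeat(both()).emit().times(3).dedup()   // K-neighbor: vertices within 3 hops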
FS performance
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w)
HugeGraph | 0.494 | 0.103 | 3.364 | 8.155
Titan | 11.818 | 0.239 | 377.709 | 575.678
Neo4j | 1.719 | 1.800 | 1.956 | 8.530
Notes
- The number in "()" in the header is the data scale, in edges
- The data in the table is the time to find the shortest paths from the first vertex to 100 randomly chosen vertices, in s
- For example, HugeGraph with the RocksDB backend takes 0.103s in total to find the shortest paths from the first vertex to 100 random vertices in the amazon0601 graph
Conclusions
- When the data scale is small or vertices have few associations, HugeGraph outperforms Neo4j and Titan
- As the data scale grows and the connectivity of vertices increases, HugeGraph's performance approaches Neo4j's, and both are far better than Titan's
K-neighbor performance
Vertex | Metric | 1-hop | 2-hop | 3-hop | 4-hop | 5-hop | 6-hop
v1 | time | 0.031s | 0.033s | 0.048s | 0.500s | 11.27s | OOM
v111 | time | 0.027s | 0.034s | 0.115s | 1.36s | OOM | –
v1111 | time | 0.039s | 0.027s | 0.052s | 0.511s | 10.96s | OOM
Notes
- The JVM memory of HugeGraph-Server is set to 32GB; OOM occurs when the amount of data is too large
K-out performance
Vertex | Metric | 1-hop | 2-hop | 3-hop | 4-hop | 5-hop | 6-hop
v1 | time | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s | OOM
v1 | degree | 10 | 133 | 2453 | 50,830 | 1,128,688 | –
v111 | time | 0.032s | 0.042s | 0.136s | 1.25s | 20.62s | OOM
v111 | degree | 10 | 211 | 4944 | 113,150 | 2,629,970 | –
v1111 | time | 0.039s | 0.045s | 0.053s | 1.10s | 2.92s | OOM
v1111 | degree | 10 | 140 | 2555 | 50,825 | 1,070,230 | –
Notes
- The JVM memory of HugeGraph-Server is set to 32GB; OOM occurs when the amount of data is too large
Conclusions
- In FS scenarios, HugeGraph outperforms Neo4j and Titan
- In K-neighbor and K-out scenarios, HugeGraph returns results within seconds for up to 5 hops
2.4 Comprehensive graph performance test - CW
Database | Scale 1000 | Scale 5000 | Scale 10000 | Scale 20000
HugeGraph(core) | 20.804 | 242.099 | 744.780 | 1700.547
Titan | 45.790 | 820.633 | 2652.235 | 9568.623
Neo4j | 5.913 | 50.267 | 142.354 | 460.880
Notes
- "Scale" is measured in vertices
- The data in the table is the time required for community detection to complete, in s; for example, HugeGraph with the RocksDB backend takes 744.780s on a dataset of scale 10000 until community aggregation no longer changes
- The CW test is a comprehensive evaluation of CRUD
- In this test HugeGraph, like Titan, operates directly on core without going through the client
Conclusion
- Community clustering algorithm performance: Neo4j > HugeGraph > Titan
8.2 - HugeGraph-API Performance
The HugeGraph API performance test mainly measures the concurrent processing capability of HugeGraph-Server for RESTful API requests, including:
- single insertion of vertices/edges
- batch insertion of vertices/edges
- queries on vertices/edges
The RESTful API performance of each HugeGraph release can be found below:
Earlier versions only provided the API performance test for the best-performing backend among those supported by HugeGraph; since version 0.5.6, stand-alone and cluster results are provided separately
8.2.1 - v0.5.6 Stand-alone(RocksDB)
1 Test environment
Machine under load:
CPU | Memory | NIC | Disk
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD
- Load-generating machine: same configuration as the machine under load
- Test tool: apache-Jmeter-2.5.1
Note: the load-generating machine and the machine under load are in the same data center
2 Test description
2.1 Definitions (all times in ms)
- Samples – the total number of threads completed in this scenario
- Average – average response time
- Median – the statistical median of the response time
- 90% Line – 90% of all threads have a response time below this value
- Min – minimum response time
- Max – maximum response time
- Error – error rate
- Throughput – throughput
- KB/sec – throughput measured by traffic
2.2 Underlying storage
RocksDB is used as the backend storage, and HugeGraph and RocksDB run on the same machine. The server-related configuration files keep their defaults except for the modified host and port.
3 Summary of performance results
- HugeGraph inserts single vertices and edges at about 10,000 per second
- The batch insertion speed of vertices and edges is much higher than the single insertion speed
- The concurrency of querying vertices and edges by id can exceed 13,000, with an average request latency below 50ms
4 Test results and analysis
4.1 Batch insertion
4.1.1 Stress ceiling test
Test method
Keep increasing the concurrency to find the maximum load under which the server still serves normally
Stress parameters
Duration: 5min
Maximum insertion speed of vertices:
Conclusion:
- At a concurrency of 2200, the vertex throughput is 2026.8; data processed per second: 2026.8*200=405360/s
Maximum insertion speed of edges
Conclusion:
- At a concurrency of 900, the edge throughput is 776.9; data processed per second: 776.9*500=388450/s
4.2 Single insertion
4.2.1 Stress ceiling test
Test method
Keep increasing the concurrency to find the maximum load under which the server still serves normally
Stress parameters
- Duration: 5min
- Service abnormality indicator: error rate greater than 0.00%
Single insertion of vertices
Conclusion:
- At a concurrency of 11500, the throughput is 10730; the single-insertion concurrency capacity for vertices is 11500
Single insertion of edges
Conclusion:
- At a concurrency of 9000, the throughput is 8418; the single-insertion concurrency capacity for edges is 9000
4.3 Query by id
4.3.1 Stress ceiling test
Test method
Keep increasing the concurrency to find the maximum load under which the server still serves normally
Stress parameters
- Duration: 5min
- Service abnormality indicator: error rate greater than 0.00%
Query vertices by id
Conclusion:
- At a concurrency of 14000, the throughput is 12663; the by-id query concurrency capacity for vertices is 14000, with an average latency of 44ms
Query edges by id
Conclusion:
- At a concurrency of 13000, the throughput is 12225; the by-id query concurrency capacity for edges is 13000, with an average latency of 12ms
8.2.2 - v0.5.6 Cluster(Cassandra)
1 Test environment
Machine under load:
CPU | Memory | NIC | Disk
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD
- Load-generating machine: same configuration as the machine under load
- Test tool: apache-Jmeter-2.5.1
Note: the load-generating machine and the machine under load are in the same data center
2 Test description
2.1 Definitions (all times in ms)
- Samples – the total number of threads completed in this scenario
- Average – average response time
- Median – the statistical median of the response time
- 90% Line – 90% of all threads have a response time below this value
- Min – minimum response time
- Max – maximum response time
- Error – error rate
- Throughput – throughput
- KB/sec – throughput measured by traffic
2.2 Underlying storage
The backend storage is a 15-node Cassandra cluster; HugeGraph and the Cassandra cluster are located on different servers. The server-related configuration files keep their defaults except for the modified host and port.
3 Summary of performance results
- HugeGraph inserts single vertices at about 9000 per second and single edges at about 4500 per second
- The batch insertion speeds of vertices and edges are about 50,000/s and 150,000/s respectively, much higher than single insertion
- The concurrency of querying vertices and edges by id can exceed 12,000, with an average request latency below 70ms
4 Test results and analysis
4.1 Batch insertion
4.1.1 Stress ceiling test
Test method
Keep increasing the concurrency to find the maximum load under which the server still serves normally
Stress parameters
Duration: 5min
Maximum insertion speed of vertices:
Conclusion:
- At a concurrency of 3500, the vertex throughput is 261; data processed per second: 261*200=52200/s
Maximum insertion speed of edges
Conclusion:
- At a concurrency of 1000, the edge throughput is 323; data processed per second: 323*500=161500/s
4.2 Single insertion
4.2.1 Stress ceiling test
Test method
Keep increasing the concurrency to find the maximum load under which the server still serves normally
Stress parameters
- Duration: 5min
- Service abnormality indicator: error rate greater than 0.00%
Single insertion of vertices
Conclusion:
- At a concurrency of 9000, the throughput is 8400; the single-insertion concurrency capacity for vertices is 9000
Single insertion of edges
Conclusion:
- At a concurrency of 4500, the throughput is 4160; the single-insertion concurrency capacity for edges is 4500
4.3 Query by id
4.3.1 Stress ceiling test
Test method
Keep increasing the concurrency to find the maximum load under which the server still serves normally
Stress parameters
- Duration: 5min
- Service abnormality indicator: error rate greater than 0.00%
Query vertices by id
Conclusion:
- At a concurrency of 14500, the throughput is 13576; the by-id query concurrency capacity for vertices is 14500, with an average latency of 11ms
Query edges by id
Conclusion:
- At a concurrency of 12000, the throughput is 10688; the by-id query concurrency capacity for edges is 12000, with an average latency of 63ms
8.2.3 - v0.4.4
1 Test environment
Machines under load:
Machine No. | CPU | Memory | NIC | Disk
1 | 24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps | 1.4T HDD
2 | 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD
- Load-generating machine: same configuration as machine No. 1
- Test tool: apache-Jmeter-2.5.1
Note: the load-generating machine and the machines under load are in the same data center
2 Test description
2.1 Definitions (all times in ms)
- Samples – the total number of threads completed in this scenario
- Average – average response time
- Median – the statistical median of the response time
- 90% Line – 90% of all threads have a response time below this value
- Min – minimum response time
- Max – maximum response time
- Error – error rate
- Throughput – throughput
- KB/sec – throughput measured by traffic
2.2 Underlying storage
RocksDB is used as the backend storage, and HugeGraph and RocksDB run on the same machine. The server-related configuration files keep their defaults except for the modified host and port.
3 Summary of performance results
- The upper limit of requests HugeGraph can process per second is 7000
- Batch insertion is much faster than single insertion; on the server the results reach 220,000 edges/s and 370,000 vertices/s
- With RocksDB as the backend, increasing the number of CPUs and the memory size improves batch insertion performance; doubling CPU and memory raises performance by 45%-60%
- In batch insertion scenarios, replacing HDD with SSD yields only a small improvement of 3%-5%
4 Test results and analysis
4.1 Batch insertion
4.1.1 Stress ceiling test
Test method
Keep increasing the concurrency to find the maximum load under which the server still serves normally
Stress parameters
Duration: 5min
Maximum insertion speed of vertices and edges (high-performance server, RocksDB data on SSD):
Conclusion:
- At a concurrency of 1000, the edge throughput is 451; data processed per second: 451*500=225500/s
- At a concurrency of 2000, the vertex throughput is 1842.4; data processed per second: 1842.4*200=368480/s
1. Effect of CPU and memory on insertion performance (both servers store RocksDB data on HDD, batch insertion)
Conclusion:
- With the same HDD, doubling CPU and memory:
- Edges: throughput rises from 268 to 426, a performance gain of about 60%
- Vertices: throughput rises from 1263.8 to 1842.4, a gain of about 45%
2. Effect of SSD vs HDD on insertion performance (high-performance server, batch insertion)
Conclusion:
- Edges: throughput is 451.7 with SSD vs 426.6 with HDD, a 5% improvement
- Vertices: throughput is 1842.4 with SSD vs 1794 with HDD, about a 3% improvement
3. Effect of the number of concurrent threads on insertion performance (ordinary server, RocksDB data on HDD)
Conclusion:
- Vertices: the response times at 1000 concurrency (7ms) and at 1500 concurrency (1028ms) differ enormously, and throughput stays around 1300, so the inflection point should be around 1300; at 1300 concurrency the response time reaches 22ms, still under control. Compared with HugeGraph 0.2 (1000 concurrency: average response time 8959ms), the processing capability has taken a qualitative leap;
- Edges: from 1000 to 2000 concurrency the processing time becomes too long, exceeding 3s, while throughput hovers around 270, so further increasing the number of concurrent threads will not raise throughput much; 270 is an inflection point. Compared with HugeGraph 0.2 (1000 concurrency: average response time 31849ms), the improvement is remarkable;
4.2 Single insertion
4.2.1 Stress ceiling test
Test method
Keep increasing the concurrency to find the maximum load under which the server still serves normally
Stress parameters
- Duration: 5min
- Service abnormality indicator: error rate greater than 0.00%
Conclusion:
- Vertices:
- 4000 concurrency: normal, no errors, average time below 1ms; 6000 concurrency: no errors, average time 5ms, within an acceptable range;
- 8000 concurrency: 0.01% errors with connection timeout failures, beyond capacity; the peak should be around 7000
- Edges:
- 4000 concurrency: response time 1ms; 6000 concurrency: no anomalies, average response time 8ms (the main difference lies in IO network recv/send and CPU);
- 8000 concurrency: 0.01% error rate, average time 15ms; the inflection point should be around 7000, matching the vertex results;
8.2.4 - v0.2
1 Test environment
1.1 Software and hardware information
The load-generating machine and the machine under load have the same configuration, with the following basic parameters:
CPU | Memory | NIC
24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps
Test tool: apache-Jmeter-2.5.1
1.2 Service configuration
- HugeGraph version: 0.2
- Backend storage: the embedded cassandra-3.10, deployed as a single node;
- Backend configuration changes: the following two properties in cassandra.yaml were modified, with all other options kept at their defaults
batch_size_warn_threshold_in_kb: 1000
batch_size_fail_threshold_in_kb: 1000
- HugeGraphServer, HugeGremlinServer and cassandra all run on the same machine; the server-related configuration files keep their defaults except for the modified host and port.
1.3 Definitions
- Samples – the total number of threads completed in this scenario
- Average – average response time
- Median – the statistical median of the response time
- 90% Line – 90% of all threads have a response time below this value
- Min – minimum response time
- Max – maximum response time
- Error – error rate
- Throughput – throughput
- KB/sec – throughput measured by traffic
Note: all times are in ms
2 Test results
2.1 schema
Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec
property_keys | 331000 | 1 | 1 | 2 | 0 | 172 | 0.00% | 920.7/sec | 178.1
vertex_labels | 331000 | 1 | 2 | 2 | 1 | 126 | 0.00% | 920.7/sec | 193.4
edge_labels | 331000 | 2 | 2 | 3 | 1 | 158 | 0.00% | 920.7/sec | 242.8
Conclusion: under a sustained load of 1000 concurrency for 5 minutes, the schema interfaces respond in 1-2ms on average, with no pressure
2.2 Single insertion
2.2.1 Insertion rate test
Stress parameters
Test method: fix the concurrency and measure the processing rate of the server and backend
- Concurrency: 1000
- Duration: 5min
Performance metrics
Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec
single_insert_vertices | 331000 | 0 | 1 | 1 | 0 | 21 | 0.00% | 920.7/sec | 234.4
single_insert_edges | 331000 | 2 | 2 | 3 | 1 | 53 | 0.00% | 920.7/sec | 309.1
Conclusion
- Vertices: average response time 1ms; each request inserts one record and about 920 requests are processed per second, so roughly 1*920 ≈ 920 records are processed per second;
- Edges: average response time 1ms; each request inserts one record and about 920 requests are processed per second, so roughly 1*920 ≈ 920 records are processed per second;
2.2.2 Stress ceiling test
Test method: keep increasing the concurrency to find the maximum load under which the server still serves normally
Stress parameters
- Duration: 5min
- Service abnormality indicator: error rate greater than 0.00%
Performance metrics
Concurrency | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec
2000(vertex) | 661916 | 1 | 1 | 1 | 0 | 3012 | 0.00% | 1842.9/sec | 469.1
4000(vertex) | 1316124 | 13 | 1 | 14 | 0 | 9023 | 0.00% | 3673.1/sec | 935.0
5000(vertex) | 1468121 | 1010 | 1135 | 1227 | 0 | 9223 | 0.06% | 4095.6/sec | 1046.0
7000(vertex) | 1378454 | 1617 | 1708 | 1886 | 0 | 9361 | 0.08% | 3860.3/sec | 987.1
2000(edge) | 629399 | 953 | 1043 | 1113 | 1 | 9001 | 0.00% | 1750.3/sec | 587.6
3000(edge) | 648364 | 2258 | 2404 | 2500 | 2 | 9001 | 0.00% | 1810.7/sec | 607.9
4000(edge) | 649904 | 1992 | 2112 | 2211 | 1 | 9001 | 0.06% | 1812.5/sec | 608.5
Conclusion
- Vertices:
- 4000 concurrency: normal, no errors, average time 13ms;
- 5000 concurrency: inserting 5000 records per second produces 0.06% errors; the server can no longer cope, so the peak should be around 4000
- Edges:
- 1000 concurrency: response time 2ms, quite different from 2000 concurrency, mainly because IO network recv/send and CPU almost double;
- 2000 concurrency: inserting 2000 records per second, average time 953ms, about 1750 requests processed per second;
- 3000 concurrency: inserting 3000 records per second, average time 2258ms, about 1810 requests processed per second;
- 4000 concurrency: inserting 4000 records per second, about 1812 requests processed per second;
2.3 Batch insertion
2.3.1 Insertion rate test
Stress parameters
Test method: fix the concurrency and measure the processing rate of the server and backend
- Concurrency: 1000
- Duration: 5min
Performance metrics
Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec
batch_insert_vertices | 37162 | 8959 | 9595 | 9704 | 17 | 9852 | 0.00% | 103.4/sec | 393.3
batch_insert_edges | 10800 | 31849 | 34544 | 35132 | 435 | 35747 | 0.00% | 28.8/sec | 814.9
Conclusion
- Vertices: average response time 8959ms, too long. Each request inserts 199 records and about 103 requests are processed per second, so roughly 199*103 ≈ 20,000 records are processed per second;
- Edges: average response time 31849ms, too long. Each request inserts 499 records and about 28 requests are processed per second, so roughly 28*499 ≈ 13,900 records are processed per second;
8.3 - HugeGraph-Loader Performance
Use cases
When the graph data (vertices and edges) to be batch-inserted is at the billion level or below, or the total data volume is less than a TB, the HugeGraph-Loader tool can be used to import graph data continuously at high speed.
Performance
All tests use the edge data of the website dataset.
RocksDB stand-alone performance
- With label index disabled: 228,000 edges/s
- With label index enabled: 153,000 edges/s
Cassandra cluster performance
- With label index enabled (the default): 63,000 edges/s
8.4 -
1 Test environment
1.1 Hardware information
CPU | Memory | NIC | Disk
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD
1.2 Software information
1.2.1 Test cases
The tests use graphdb-benchmark, a benchmark suite for graph databases. The suite mainly contains 4 types of tests:
Massive Insertion: batch insertion of vertices and edges, committing a certain number of vertices or edges at a time
Single Insertion: single insertion, committing each vertex or edge immediately
Query: the basic query operations of a graph database:
- Find Neighbors: query the neighbors of all vertices
- Find Adjacent Nodes: query the adjacent vertices of all edges
- Find Shortest Path: query the shortest paths from the first vertex to 100 random vertices
Clustering: a community detection algorithm based on the Louvain Method
1.2.2 Test datasets
The tests use both synthetic and real data:
MIW, SIW and QW use SNAP datasets
CW uses synthetic data generated by the LFR-Benchmark generator
The sizes of the datasets used in this test:
Name | Number of vertices | Number of edges | File size
email-enron.txt | 36,691 | 367,661 | 4MB
com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB
amazon0601.txt | 403,393 | 3,387,388 | 47.9MB
1.3 Service configuration
- HugeGraph version: 0.4.4; RestServer, Gremlin Server and the backends all run on the same server
- Cassandra version: cassandra-3.10; commit-log and data share the SSD
- RocksDB version: rocksdbjni-5.8.6
- Titan version: 0.5.4, using thrift+Cassandra mode
The Titan version adapted by graphdb-benchmark is 0.5.4
2 Test results
2.1 Batch insertion performance
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
Titan | 9.516 | 88.123 | 111.586
RocksDB | 2.345 | 14.076 | 16.636
Cassandra | 11.930 | 108.709 | 101.959
Memory | 3.077 | 15.204 | 13.841
Notes
- The number in "()" in the header is the data scale, in edges (1w = 10,000)
- The data in the table is the batch insertion time, in seconds (s)
- For example, HugeGraph takes 14.076s to insert the 3 million edges of the amazon0601 dataset using RocksDB, about 210,000 edges/s
Conclusions
- The insertion performance of the RocksDB and Memory backends is better than Cassandra
- When both use Cassandra as the backend, the insertion performance of HugeGraph and Titan is close
2.2 Traversal performance
2.2.1 Terminology
- FN (Find Neighbor): traverse all vertices, find the adjacent edges of each vertex, and find the other vertex through the edge and vertex
- FA (Find Adjacent): traverse all edges and obtain the source vertex and target vertex of each edge
2.2.2 FN performance
Backend | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w)
Titan | 7.724 | 70.935 | 128.884
RocksDB | 8.876 | 65.852 | 63.388
Cassandra | 13.125 | 126.959 | 102.580
Memory | 22.309 | 207.411 | 165.609
Notes
- The number in "()" in the header is the data scale, in vertices
- The data in the table is the time taken to traverse the vertices, in s
- For example, HugeGraph with the RocksDB backend traverses all vertices of amazon0601, looking up the adjacent edges and the other vertex of each, taking 65.852s in total
2.2.3 FA performance
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
Titan | 7.119 | 63.353 | 115.633
RocksDB | 6.032 | 64.526 | 52.721
Cassandra | 9.410 | 102.766 | 94.197
Memory | 12.340 | 195.444 | 140.89
Notes
- The number in "()" in the header is the data scale, in edges
- The data in the table is the time taken to traverse the edges, in s
- For example, HugeGraph with the RocksDB backend traverses all edges of amazon0601 and queries the two vertices of each edge, taking 64.526s in total
Conclusion
- HugeGraph RocksDB > Titan thrift+Cassandra > HugeGraph Cassandra > HugeGraph Memory
2.3 Performance of common graph analysis methods in HugeGraph
Terminology
- FS (Find Shortest Path): find the shortest path between vertices
- K-neighbor: all vertices reachable from a starting vertex within K hops, including vertices reachable in 1, 2, 3…(K-1) and K hops
- K-out: vertices reachable from a starting vertex through exactly K outgoing hops
FS performance
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
Titan | 11.333 | 0.313 | 376.06
RocksDB | 44.391 | 2.221 | 268.792
Cassandra | 39.845 | 3.337 | 331.113
Memory | 35.638 | 2.059 | 388.987
Notes
- The number in "()" in the header is the data scale, in edges
- The data in the table is the time to find the shortest paths from the first vertex to 100 randomly chosen vertices, in s
- For example, HugeGraph with the RocksDB backend takes 2.221s in total to find the shortest paths from the first vertex to 100 random vertices
Conclusions
- When the data scale is small or vertices have few associations, Titan's shortest-path performance is better than HugeGraph's
- As the data scale grows and the connectivity of vertices increases, HugeGraph's shortest-path performance surpasses Titan's
K-neighbor performance
Vertex | Metric | 1-hop | 2-hop | 3-hop | 4-hop | 5-hop | 6-hop
v1 | time | 0.031s | 0.033s | 0.048s | 0.500s | 11.27s | OOM
v111 | time | 0.027s | 0.034s | 0.115s | 1.36s | OOM | –
v1111 | time | 0.039s | 0.027s | 0.052s | 0.511s | 10.96s | OOM
Notes
- The JVM memory of HugeGraph-Server is set to 32GB; OOM occurs when the amount of data is too large
K-out performance
Vertex | Metric | 1-hop | 2-hop | 3-hop | 4-hop | 5-hop | 6-hop
v1 | time | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s | OOM
v1 | degree | 10 | 133 | 2453 | 50,830 | 1,128,688 | –
v111 | time | 0.032s | 0.042s | 0.136s | 1.25s | 20.62s | OOM
v111 | degree | 10 | 211 | 4944 | 113,150 | 2,629,970 | –
v1111 | time | 0.039s | 0.045s | 0.053s | 1.10s | 2.92s | OOM
v1111 | degree | 10 | 140 | 2555 | 50,825 | 1,070,230 | –
Notes
- The JVM memory of HugeGraph-Server is set to 32GB; OOM occurs when the amount of data is too large
Conclusions
- In FS scenarios, HugeGraph outperforms Titan
- In K-neighbor and K-out scenarios, HugeGraph returns results within seconds for up to 5 hops
2.4 Comprehensive graph performance test - CW
Database | Scale 1000 | Scale 5000 | Scale 10000 | Scale 20000
Titan | 45.943 | 849.168 | 2737.117 | 9791.46
Memory(core) | 41.077 | 1825.905 | * | *
Cassandra(core) | 39.783 | 862.744 | 2423.136 | 6564.191
RocksDB(core) | 33.383 | 199.894 | 763.869 | 1677.813
Notes
- "Scale" is measured in vertices
- The data in the table is the time required for community detection to complete, in s; for example, HugeGraph with the RocksDB backend takes 763.869s on a dataset of scale 10000 until community aggregation no longer changes
- "*" means not finished within 10000s
- The CW test is a comprehensive evaluation of CRUD
- The last three are different backends of HugeGraph; in this test HugeGraph, like Titan, operates directly on core without going through the client
Conclusions
- With the Cassandra backend, HugeGraph performs slightly better than Titan, and the advantage grows with the data scale; at scale 20000 it is 30% faster than Titan
- With the RocksDB backend, HugeGraph performs far better than Titan and HugeGraph's own Cassandra backend, being 6x and 4x faster respectively
9 - Contribution Guidelines
9.1 - How to Contribute to HugeGraph
Thanks for taking the time to contribute! As an open source project, HugeGraph is looking forward to be contributed from everyone, and we are also grateful to all the contributors.
The following is a contribution guide for HugeGraph:
1. Preparation
We can contribute by reporting issues, submitting code patches or any other feedback.
Before submitting the code, we need to do some preparation:
Sign up or login to GitHub: https://github.com
Fork HugeGraph repo from GitHub: https://github.com/apache/incubator-hugegraph/fork
Clone code from fork repo to local: https://github.com/${GITHUB_USER_NAME}/hugegraph
# clone code from remote to local repo
+
It is recommended to use HugeGraph-Studio to execute the above code in a visual way. The code can also be executed through HugeGraph-Client, HugeApi, GremlinConsole, GremlinDriver, and other approaches.
3.2 Summary
HugeGraph currently supports the Gremlin syntax; users can fulfill various query needs through Gremlin / REST-API.
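These queries can also be issued directly over HTTP. As a minimal sketch (assuming a locally running HugeGraphServer on the default port 8080 and a graph named hugegraph), a Gremlin statement can be posted to the REST-API like this:
# submit a Gremlin statement to HugeGraphServer's REST-API (host, port and graph name are assumptions)
curl -X POST -H "Content-Type: application/json" \
     -d '{"gremlin": "hugegraph.traversal().V().limit(3)"}' \
     http://127.0.0.1:8080/apis/gremlin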
8 - PERFORMANCE
8.1 - HugeGraph BenchMark Performance
1 Test environment
1.1 Hardware information
| CPU | Memory | NIC | Disk |
|---|---|---|---|
| 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD |
1.2 Software information
1.2.1 Test cases
Testing is done using the graphdb-benchmark, a benchmark suite for graph databases. This benchmark suite mainly consists of four types of tests:
- Massive Insertion, which involves batch insertion of vertices and edges, with a certain number of vertices or edges being submitted at once.
- Single Insertion, which involves the immediate insertion of each vertex or edge, one at a time.
- Query, which mainly includes the basic query operations of the graph database:
- Find Neighbors, which queries the neighbors of all vertices.
- Find Adjacent Nodes, which queries the adjacent vertices of all edges.
- Find Shortest Path, which queries the shortest path from the first vertex to 100 random vertices.
- Clustering, which is a community detection algorithm based on the Louvain Method.
1.2.2 Test dataset
Tests are conducted using both synthetic and real data.
MIW, SIW, and QW use SNAP datasets:
CW uses synthetic data generated by the LFR-Benchmark generator.
The sizes of the datasets used in this test:
| Name | Number of Vertices | Number of Edges | File Size |
|---|---|---|---|
| email-enron.txt | 36,691 | 367,661 | 4MB |
| com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB |
| amazon0601.txt | 403,393 | 3,387,388 | 47.9MB |
| com-lj.ungraph.txt | 3,997,961 | 34,681,189 | 479MB |
1.3 Service configuration
- HugeGraph version: 0.5.6; RestServer, Gremlin Server, and backends are all on the same server
- RocksDB version: rocksdbjni-5.8.6
- Titan version: 0.5.4, using thrift+Cassandra mode
- Cassandra version: cassandra-3.10; commit-log and data share the SSD
- Neo4j version: 2.0.1
The Titan version adapted by graphdb-benchmark is 0.5.4.
2 Test results
2.1 Batch insertion performance
| Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
|---|---|---|---|---|
| HugeGraph | 0.629 | 5.711 | 5.243 | 67.033 |
| Titan | 10.15 | 108.569 | 150.266 | 1217.944 |
| Neo4j | 3.884 | 18.938 | 24.890 | 281.537 |
Instructions
- The data in the table header "()" represents the data scale, in terms of edges (w denotes 10,000, so 300w = 3 million)
- The data in the table is the time for batch insertion, in seconds
- For example, HugeGraph(RocksDB) spent 5.711 seconds to insert 3 million edges of the amazon0601 dataset.
Conclusion
- The performance of batch insertion: HugeGraph(RocksDB) > Neo4j > Titan(thrift+Cassandra)
2.2 Traversal performance
2.2.1 Explanation of terms
- FN(Find Neighbor): Traverse all vertices, find the adjacent edges based on each vertex, and use the edges and vertices to find the other vertices adjacent to the original vertex.
- FA(Find Adjacent): Traverse all edges, get the source vertex and target vertex based on each edge.
2.2.2 FN performance
| Backend | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w) | com-lj.ungraph(400w) |
|---|---|---|---|---|
| HugeGraph | 4.072 | 45.118 | 66.006 | 609.083 |
| Titan | 8.084 | 92.507 | 184.543 | 1099.371 |
| Neo4j | 2.424 | 10.537 | 11.609 | 106.919 |
Instructions
- The data in the table header “( )” represents the data scale, in terms of vertices.
- The data in the table represents the time spent traversing vertices, in seconds.
- For example, HugeGraph uses the RocksDB backend to traverse all vertices in amazon0601, and search for adjacent edges and another vertex, which takes a total of 45.118 seconds.
2.2.3 FA performance
| Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
|---|---|---|---|---|
| HugeGraph | 1.540 | 10.764 | 11.243 | 151.271 |
| Titan | 7.361 | 93.344 | 169.218 | 1085.235 |
| Neo4j | 1.673 | 4.775 | 4.284 | 40.507 |
Explanation
- The data size in the header "( )" is based on the number of edges.
- The data in the table is the time it takes to traverse the edges, in seconds.
- For example, HugeGraph with the RocksDB backend traverses all edges in the amazon0601 dataset and looks up the source and target vertex of each edge, taking a total of 10.764 seconds.
Conclusion
- Traversal performance: Neo4j > HugeGraph(RocksDB) > Titan(thrift+Cassandra)
2.3 Performance of Common Graph Analysis Methods in HugeGraph
Terminology Explanation
- FS (Find Shortest Path): finding the shortest path between two vertices
- K-neighbor: all vertices that can be reached by traversing K hops (including 1, 2, 3…(K-1) hops) from the starting vertex
- K-out: all vertices that can be reached by traversing exactly K out-edges from the starting vertex.
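Note that the benchmark above invokes these algorithms directly on the core; purely as an illustration of the three operations, HugeGraph also exposes them through its traverser REST-API. A minimal sketch, assuming a server on 127.0.0.1:8080, a graph named hugegraph, and existing vertex ids "1:marko" and "2:lop" (all assumptions):
# shortest path between two vertices, limited to 5 hops
curl "http://127.0.0.1:8080/apis/graphs/hugegraph/traversers/shortestpath?source=%221:marko%22&target=%222:lop%22&max_depth=5"
# K-neighbor: vertices reachable within 2 hops of the source
curl "http://127.0.0.1:8080/apis/graphs/hugegraph/traversers/kneighbor?source=%221:marko%22&max_depth=2"
# K-out: vertices reachable in exactly 2 out-hops of the source
curl "http://127.0.0.1:8080/apis/graphs/hugegraph/traversers/kout?source=%221:marko%22&max_depth=2"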
FS performance
| Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
|---|---|---|---|---|
| HugeGraph | 0.494 | 0.103 | 3.364 | 8.155 |
| Titan | 11.818 | 0.239 | 377.709 | 575.678 |
| Neo4j | 1.719 | 1.800 | 1.956 | 8.530 |
Explanation
- The data in the header “()” represents the data scale in terms of edges
- The data in the table is the time it takes to find the shortest path from the first vertex to 100 randomly selected vertices in seconds
- For example, HugeGraph using the RocksDB backend to find the shortest path from the first vertex to 100 randomly selected vertices in the amazon0601 graph took a total of 0.103s.
Conclusion
- In scenarios with small data size or few vertex relationships, HugeGraph outperforms Neo4j and Titan.
- As the data size increases and the degree of vertex association increases, the performance of HugeGraph and Neo4j tends to be similar, both far exceeding Titan.
K-neighbor Performance
| Vertex / Depth | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| v1 | 0.031s | 0.033s | 0.048s | 0.500s | 11.27s | OOM |
| v111 | 0.027s | 0.034s | 0.115s | 1.36s | OOM | – |
| v1111 | 0.039s | 0.027s | 0.052s | 0.511s | 10.96s | OOM |
Explanation
- HugeGraph-Server’s JVM memory is set to 32GB and may experience OOM when the data is too large.
K-out performance
| Vertex / Depth | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| v1 (time) | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s | OOM |
| v1 (degree) | 10 | 133 | 2453 | 50,830 | 1,128,688 | – |
| v111 (time) | 0.032s | 0.042s | 0.136s | 1.25s | 20.62s | OOM |
| v111 (degree) | 10 | 211 | 4944 | 113,150 | 2,629,970 | – |
| v1111 (time) | 0.039s | 0.045s | 0.053s | 1.10s | 2.92s | OOM |
| v1111 (degree) | 10 | 140 | 2555 | 50,825 | 1,070,230 | – |
Explanation
- The JVM memory of HugeGraph-Server is set to 32GB, and OOM may occur when the data is too large.
Conclusion
- In the FS scenario, HugeGraph outperforms Neo4j and Titan in terms of performance.
- In the K-neighbor and K-out scenarios, HugeGraph can achieve results returned within seconds within 5 degrees.
2.4 Comprehensive Performance Test - CW
| Database | Size 1000 | Size 5000 | Size 10000 | Size 20000 |
|---|---|---|---|---|
| HugeGraph(core) | 20.804 | 242.099 | 744.780 | 1700.547 |
| Titan | 45.790 | 820.633 | 2652.235 | 9568.623 |
| Neo4j | 5.913 | 50.267 | 142.354 | 460.880 |
Explanation
- The “scale” is based on the number of vertices.
- The data in the table is the time required to complete community detection, in seconds. For example, HugeGraph with the RocksDB backend takes 744.780 seconds on the dataset of 10,000 vertices until the community aggregation no longer changes.
- The CW test is a comprehensive evaluation of CRUD operations.
- In this test, HugeGraph, like Titan, did not use the client and directly operated on the core.
Conclusion
- Performance of community detection algorithm: Neo4j > HugeGraph > Titan
8.2 - HugeGraph-API Performance
The HugeGraph API performance test mainly tests HugeGraph-Server’s ability to concurrently process RESTful API requests, including:
- Single insertion of vertices/edges
- Batch insertion of vertices/edges
- Vertex/Edge Queries
For the performance test of the RESTful API of each release version of HugeGraph, please refer to:
Starting from version 0.5.6, performance tests are provided not only for the best-performing backend among those supported by HugeGraph, but also for both stand-alone and cluster environments.
8.2.1 - v0.5.6 Stand-alone(RocksDB)
1 Test environment
Information about the machine under test:
| CPU | Memory | NIC | Disk |
|---|---|---|---|
| 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD |
- Information about the machine used to generate load: configured the same as the machine that is being tested under load.
- Testing tool: Apache JMeter 2.5.1
Note: The load-generating machine and the machine under test are located in the same local network.
2 Test description
2.1 Definition of terms (the unit of time is ms)
- Samples: The total number of threads completed in the current scenario.
- Average: The average response time.
- Median: The statistical median of the response time.
- 90% Line: The response time below which 90% of all threads fall.
- Min: The minimum response time.
- Max: The maximum response time.
- Error: The error rate.
- Throughput: The number of requests processed per unit of time.
- KB/sec: Throughput measured in terms of data transferred per second.
2.2 Underlying storage
RocksDB is used for backend storage, HugeGraph and RocksDB are both started on the same machine, and the configuration files related to the server remain as default except for the modification of the host and port.
3 Summary of performance results
- The speed of inserting a single vertex or edge in HugeGraph is about 10,000 (1w) per second
- The batch insertion speed of vertices and edges is much faster than the single insertion speed
- The concurrency of querying vertices and edges by id can reach more than 13000, and the average delay of requests is less than 50ms
4 Test results and analysis
4.1 batch insertion
4.1.1 Upper limit stress testing
Test methods
The upper-limit stress test continuously increases the concurrency to find the maximum load at which the server can still provide normal service.
Stress Parameters
Duration: 5 minutes
Maximum insertion speed for vertices:
Conclusion:
- With a concurrency of 2200, the throughput for vertices is 2026.8 requests/s; since each batch request inserts 200 vertices, the system processes about 405,360 vertices per second (2026.8 × 200).
Maximum insertion speed for edges
Conclusion:
- With a concurrency of 900, the throughput for edges is 776.9 requests/s; since each batch request inserts 500 edges, the system processes about 388,450 edges per second (776.9 × 500).
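The requests benchmarked here are batch-insert calls to the RESTful API. A minimal sketch of such a request (host, port, graph name, and the person vertex label are assumptions; the benchmark itself batches 200 vertices or 500 edges per request):
# batch-insert vertices; the benchmark uses 200 per request, 2 are shown here
curl -X POST -H "Content-Type: application/json" \
     -d '[{"label": "person", "properties": {"name": "marko", "age": 29}},
          {"label": "person", "properties": {"name": "vadas", "age": 27}}]' \
     http://127.0.0.1:8080/apis/graphs/hugegraph/graph/vertices/batch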
4.2 Single insertion
4.2.1 Stress limit testing
Test Methods
Stress limit testing is a process of continuously increasing the concurrency level to test the upper limit of the server’s ability to provide normal service.
Stress parameters
- Duration: 5 minutes.
- Service exception indicator: Error rate greater than 0.00%.
Single vertex insertion
Conclusion:
- With a concurrency of 11500, the throughput is 10730 requests/s; the concurrency capacity for single vertex insertion is therefore about 11500.
Single edge insertion
Conclusion:
- With a concurrency of 9000, the throughput is 8418 requests/s; the concurrency capacity for single edge insertion is therefore about 9000.
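For comparison, a single-insert request commits exactly one record per call. A minimal sketch, under the same assumptions as the batch example above:
# insert a single vertex (one record per request)
curl -X POST -H "Content-Type: application/json" \
     -d '{"label": "person", "properties": {"name": "josh", "age": 32}}' \
     http://127.0.0.1:8080/apis/graphs/hugegraph/graph/vertices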
4.3 Search by ID
4.3.1 Stress test upper limit
Testing method
Continuously increase the concurrency level to test the upper limit at which the server can still provide normal service.
Stress parameters
- Duration: 5 minutes
- Service abnormality indicator: error rate greater than 0.00%
Querying vertices by ID
Conclusion:
- At a concurrency of 14,000, the throughput is 12,663; the concurrency capacity for querying vertices by ID is 14,000, with an average delay of 44ms.
Querying edges by ID
Conclusion:
- At a concurrency of 13,000, the throughput is 12,225; the concurrency capacity for querying edges by ID is 13,000, with an average delay of 12ms.
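The queries under test fetch a vertex or edge directly by its id. A minimal sketch for a vertex (the string id "1:marko" is an assumption; it is quoted and URL-encoded in the path):
# query a vertex by id
curl "http://127.0.0.1:8080/apis/graphs/hugegraph/graph/vertices/%221:marko%22"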
8.2.2 - v0.5.6 Cluster(Cassandra)
1 Test environment
Information about the machine under test:
| CPU | Memory | NIC | Disk |
|---|---|---|---|
| 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD |
- Load-generating machine: configured the same as the machine under test.
- Testing tool: Apache JMeter 2.5.1.
Note: The machine used to initiate the load and the machine being tested are located in the same data center (or server room)
2 Test Description
2.1 Definition of terms (the unit of time is ms)
- Samples – The total number of threads completed in this scenario.
- Average – The average response time.
- Median – The median response time in statistical terms.
- 90% Line – The response time below which 90% of all threads fall.
- Min – The minimum response time.
- Max – The maximum response time.
- Error – The error rate.
- Throughput – The number of transactions processed per unit of time.
- KB/sec – The throughput measured in terms of data transmitted per second.
2.2 Low-Level Storage
A 15-node Cassandra cluster is used for backend storage. HugeGraph and the Cassandra cluster are located on separate servers. Server-related configuration files are modified only for host and port settings, while the rest remain default.
3 Summary of Performance Results
- The speed of single vertex and edge insertion in HugeGraph is 9000 and 4500 per second, respectively.
- The speed of bulk vertex and edge insertion is 50,000 and 150,000 per second, respectively, which is much higher than the single insertion speed.
- The concurrency for querying vertices and edges by ID can reach more than 12,000, and the average request delay is less than 70ms.
4 Test Results and Analysis
4.1 Batch Insertion
4.1.1 Pressure Upper Limit Test
Test Method
Continuously increase the concurrency level to test the upper limit of the server’s ability to provide services.
Pressure Parameters
Duration: 5 minutes.
Maximum Insertion Speed of Vertices:
Conclusion:
- At a concurrency level of 3500, the throughput of vertices is 261, and the amount of data processed per second is 52,200 (261 * 200).
Maximum Insertion Speed of Edges:
Conclusion:
- At a concurrency level of 1000, the throughput of edges is 323, and the amount of data processed per second is 161,500 (323 * 500).
4.2 Single Insertion
4.2.1 Pressure Upper Limit Test
Test Method
Continuously increase the concurrency level to test the upper limit of the server’s ability to provide services.
Pressure Parameters
- Duration: 5 minutes.
- Service exception mark: Error rate greater than 0.00%.
Single Insertion of Vertices:
Conclusion:
- At a concurrency level of 9000, the throughput is 8400, and the single-insertion concurrency capability for vertices is 9000.
Single Insertion of Edges:
Conclusion:
- At a concurrency level of 4500, the throughput is 4160, and the single-insertion concurrency capability for edges is 4500.
4.3 Query by ID
4.3.1 Pressure Upper Limit Test
Test Method
Continuously increase the concurrency to test the upper limit at which the server can still provide normal service.
Pressure Parameters
- Duration: 5 minutes
- Service exception flag: error rate greater than 0.00%
Query by ID for vertices
Conclusion:
- The concurrent capacity of the vertex search by ID is 14500, with a throughput of 13576 and an average delay of 11ms.
Edge search by ID
Conclusion:
- For edge ID-based queries, the server’s concurrent capacity is up to 12,000, with a throughput of 10,688 and an average latency of 63ms.
8.2.3 - v0.4.4
1 Test environment
Target Machine Information
| Machine No. | CPU | Memory | NIC | Disk |
|---|---|---|---|---|
| 1 | 24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps | 1.4T HDD |
| 2 | 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD |
- Pressure testing machine information: Configured the same as machine number 1.
- Testing tool: Apache JMeter 2.5.1.
Note: The pressure testing machine and the machine being tested are in the same room.
2 Test Description
2.1 Definition of terms (the unit of time is ms)
- Samples – The total number of threads completed in this scenario.
- Average – The average response time.
- Median – The median response time in terms of statistical significance.
- 90% Line – The response time below which 90% of all threads fall.
- Min – The minimum response time.
- Max – The maximum response time.
- Error – The error rate.
- Throughput – The number of requests processed per unit of time.
- KB/sec – The throughput measured in terms of traffic.
2.2 Underlying storage
RocksDB is used for backend storage, HugeGraph and RocksDB are both started on the same machine, and the configuration files related to the server remain the default except for the modification of the host and port.
3 Summary of performance results
- The upper limit of the number of requests HugeGraph can handle per second is 7000
- The speed of batch insertion is much higher than that of single insertion; the test results on this server reach 220,000 (22w) edges/s and 370,000 (37w) vertices/s
- With the RocksDB backend, increasing the number of CPUs and the memory size improves batch-insert performance: doubling both increases performance by 45% to 60%
- In the batch insertion scenario, using SSD instead of HDD yields only a small performance improvement of 3%-5%
4 Test results and analysis
4.1 Batch insertion
4.1.1 Maximum Pressure Test
Test Methods
Continuously increase the concurrency level and test the upper limit of the server’s ability to provide services normally.
Pressure Parameters
Duration: 5 minutes
Maximum Insertion Speed of Vertices and Edges (High-performance server with SSD storage for RocksDB data):
Conclusion:
- With a concurrency of 1000, the edge throughput is 451, which can process 225,500 data per second: 451 * 500 = 225,500/s.
- With a concurrency of 2000, the vertex throughput is 1842.4, which can process 368,480 data per second: 1842.4 * 200 = 368,480/s.
1. The Impact of CPU and Memory on Insertion Performance (Servers Using HDD Storage for RocksDB Data, Batch Insertion)
Conclusion:
- With the same HDD disk, doubling the CPU and memory size increases edge throughput from 268 to 426, which improves performance by about 60%.
- With the same HDD disk, doubling the CPU and memory size increases vertex throughput from 1263.8 to 1842.4, which improves performance by about 45%.
2. The Impact of SSD and HDD on Insertion Performance (High-performance Servers, Batch Insertion)
Conclusion:
- For edge insertion, using SSD yields a throughput of 451.7, while using HDD yields a throughput of 426.6, which results in a 5% performance improvement.
- For vertex insertion, using SSD yields a throughput of 1842.4, while using HDD yields a throughput of 1794, which results in a performance improvement of about 3%.
3. The Impact of Different Concurrent Thread Numbers on Insertion Performance (Ordinary Servers, HDD Storage for RocksDB Data)
Conclusion:
- For vertices: at 1000 concurrency the response time is 7ms, and at 1500 concurrency it is 1028ms. Throughput stays around 1300, indicating the inflection point is around 1300 concurrency; at 1300 concurrency the response time is 22ms, still within a controllable range. Compared to HugeGraph 0.2 (1000 concurrency: average response time 8959ms), the processing capacity has made a qualitative leap.
- For edges: from 1000 to 2000 concurrency the processing time exceeds 3 seconds, and the throughput hovers around 270, so increasing the concurrency does not significantly increase the throughput; 270 is the inflection point. Compared with HugeGraph 0.2 (1000 concurrency: average response time 31849ms), the processing capacity has improved significantly.
4.2 Single insertion
4.2.1 Upper Limit Test under Pressure
Test Method
Continuously increase the concurrency level and test the upper limit of the pressure at which the server can still provide normal services.
Pressure Parameters
- Duration: 5 minutes
- Service exception criteria: Error rate greater than 0.00%.
Conclusion:
- Vertices:
- At 4000 concurrent connections, there were no errors, with an average response time of less than 1ms. At 6000 concurrent connections, there were no errors, with an average response time of 5ms, which is acceptable.
- At 8000 concurrent connections, there were 0.01% errors and the system could not handle it, resulting in connection timeout errors. The system’s peak performance should be around 7000 concurrent connections.
- Edges:
- At 4000 concurrent connections, the response time was 1ms. At 6000 concurrent connections, there were no abnormalities, with an average response time of 8ms. The main differences were in IO network recv and send as well as CPU usage.
- At 8000 concurrent connections, there was a 0.01% error rate, with an average response time of 15ms. The turning point should be around 7000 concurrent connections, which matches the vertex results.
8.2.4 - v0.2
1 Test environment
1.1 Software and hardware information
The load testing and target machines have the same configuration, with the following basic parameters:
| CPU | Memory | NIC |
|---|---|---|
| 24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps |
1.2 Service Configuration
- HugeGraph Version: 0.2
- Backend storage: the embedded cassandra-3.10, deployed as a single node.
- Backend Configuration Modification: Modified two properties in the cassandra.yaml file, while keeping the rest of the options default:
batch_size_warn_threshold_in_kb: 1000
batch_size_fail_threshold_in_kb: 1000
- HugeGraphServer, HugeGremlinServer, and Cassandra are all started on the same machine. Configuration files for the servers are modified only for the host and port settings.
1.3 Glossary
- Samples – The total number of threads completed in this scenario.
- Average – The average response time.
- Median – The statistical median of response times.
- 90% Line – The response time below which 90% of all threads fall.
- Min – The minimum response time.
- Max – The maximum response time.
- Error – The error rate.
- Throughput – The number of requests processed per unit of time.
- KB/sec – The throughput measured in kilobytes per second.
Note: All time units are measured in ms.
2 Test Results
2.1 schema
| Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
|---|---|---|---|---|---|---|---|---|---|
| property_keys | 331000 | 1 | 1 | 2 | 0 | 172 | 0.00% | 920.7/sec | 178.1 |
| vertex_labels | 331000 | 1 | 2 | 2 | 1 | 126 | 0.00% | 920.7/sec | 193.4 |
| edge_labels | 331000 | 2 | 2 | 3 | 1 | 158 | 0.00% | 920.7/sec | 242.8 |
Conclusion: Under a load of 1000 concurrent requests lasting 5 minutes, the average response time of the schema APIs is 1-2ms; they handle this load with ease.
2.2 Single Insert
2.2.1 Insertion Rate Test
Pressure Parameters
Test Method: Fixed concurrency, test server and backend processing speed.
- Concurrency: 1000
- Duration: 5 minutes
Performance Indicators
| Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
|---|---|---|---|---|---|---|---|---|---|
| single_insert_vertices | 331000 | 0 | 1 | 1 | 0 | 21 | 0.00% | 920.7/sec | 234.4 |
| single_insert_edges | 331000 | 2 | 2 | 3 | 1 | 53 | 0.00% | 920.7/sec | 309.1 |
Conclusion
- Vertices: average response time of 1ms; each request inserts one record, and about 920 requests are processed per second, so roughly 920 records are processed per second (1 × 920).
- Edges: average response time of 2ms; each request inserts one record, and about 920 requests are processed per second, so roughly 920 records are processed per second (1 × 920).
2.2.2 Stress Test
Test Method: Continuously increase concurrency to test the maximum stress level at which the server can still provide normal services.
Stress Parameters
- Duration: 5 minutes
- Service Exception Flag: Error rate greater than 0.00%
Performance Metrics
| Concurrency | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
|---|---|---|---|---|---|---|---|---|---|
| 2000(vertex) | 661916 | 1 | 1 | 1 | 0 | 3012 | 0.00% | 1842.9/sec | 469.1 |
| 4000(vertex) | 1316124 | 13 | 1 | 14 | 0 | 9023 | 0.00% | 3673.1/sec | 935.0 |
| 5000(vertex) | 1468121 | 1010 | 1135 | 1227 | 0 | 9223 | 0.06% | 4095.6/sec | 1046.0 |
| 7000(vertex) | 1378454 | 1617 | 1708 | 1886 | 0 | 9361 | 0.08% | 3860.3/sec | 987.1 |
| 2000(edge) | 629399 | 953 | 1043 | 1113 | 1 | 9001 | 0.00% | 1750.3/sec | 587.6 |
| 3000(edge) | 648364 | 2258 | 2404 | 2500 | 2 | 9001 | 0.00% | 1810.7/sec | 607.9 |
| 4000(edge) | 649904 | 1992 | 2112 | 2211 | 1 | 9001 | 0.06% | 1812.5/sec | 608.5 |
Conclusion
- Vertex:
- 4000 concurrency: normal, no error rate, average time 13ms;
- 5000 concurrency: at 5000 insertions per second there is a 0.06% error rate, so the server can no longer keep up; the peak is around 4000.
- Edge:
- 1000 concurrency: the response time is 2ms, quite different from that at 2000 concurrency, mainly because IO (network recv/send) and CPU usage have almost doubled;
- 2000 concurrency: inserting 2000 records per second, the average response time is 953ms and about 1750 requests are processed per second;
- 3000 concurrency: inserting 3000 records per second, the average response time is 2258ms and about 1810 requests are processed per second;
- 4000 concurrency: inserting 4000 records per second, about 1812 requests are processed per second;
2.3 Batch Insertion
2.3.1 Insertion Rate Test
Pressure Parameters
Test Method: Fix the concurrency and test the processing speed of the server and backend.
- Concurrency: 1000
- Duration: 5 minutes
Performance Indicators
| Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
|---|---|---|---|---|---|---|---|---|---|
| batch_insert_vertices | 37162 | 8959 | 9595 | 9704 | 17 | 9852 | 0.00% | 103.4/sec | 393.3 |
| batch_insert_edges | 10800 | 31849 | 34544 | 35132 | 435 | 35747 | 0.00% | 28.8/sec | 814.9 |
Conclusion
- Vertex: The average response time is 8959ms, which is too long. Each request inserts 199 records at about 103 requests per second, so about 20,000 records (199 × 103) are processed per second.
- Edge: The average response time is 31849ms, which is too long. Each request inserts 499 records at about 28 requests per second, so about 13,900 records (499 × 28) are processed per second.
8.3 - HugeGraph-Loader Performance
Use Cases
When the graph data (vertices and edges) to be batch-inserted is at the billion level or below, or the total data size is under a TB, the HugeGraph-Loader tool can be used to continuously and quickly import it.
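A typical invocation looks like the following sketch; the input-description file struct.json and schema file schema.groovy are placeholder names, and the host/port/graph values are assumptions:
# import the data described by struct.json into graph "hugegraph"
bin/hugegraph-loader.sh -g hugegraph -f ./struct.json -s ./schema.groovy -h 127.0.0.1 -p 8080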
Performance
All tests use the edge data of a web-links dataset.
RocksDB single-machine performance
- When label index is turned off, 228k edges/s.
- When label index is turned on, 153k edges/s.
Cassandra cluster performance
- With label index on (the default), 63k edges/s.
8.4 - HugeGraph BenchMark Performance (v0.4.4)
1 Test environment
1.1 Hardware information
| CPU | Memory | NIC | Disk |
|---|---|---|---|
| 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD |
1.2 Software information
1.2.1 Test cases
Testing uses graphdb-benchmark, a benchmark suite for graph databases. It mainly consists of four types of tests:
Massive Insertion: batch insertion of vertices and edges, with a certain number of vertices or edges submitted at once.
Single Insertion: each vertex or edge is inserted and committed immediately.
Query: the basic query operations of a graph database:
- Find Neighbors: query the neighbors of all vertices.
- Find Adjacent Nodes: query the adjacent vertices of all edges.
- Find Shortest Path: query the shortest path from the first vertex to 100 random vertices.
Clustering: a community detection algorithm based on the Louvain Method.
1.2.2 Test dataset
Tests use both synthetic and real data:
MIW, SIW, and QW use SNAP datasets.
CW uses synthetic data generated by the LFR-Benchmark generator.
The sizes of the datasets used in this test:
| Name | Number of Vertices | Number of Edges | File Size |
|---|---|---|---|
| email-enron.txt | 36,691 | 367,661 | 4MB |
| com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB |
| amazon0601.txt | 403,393 | 3,387,388 | 47.9MB |
1.3 Service configuration
- HugeGraph version: 0.4.4; RestServer, Gremlin Server, and backends are all on the same server
- Cassandra version: cassandra-3.10; commit-log and data share the SSD
- RocksDB version: rocksdbjni-5.8.6
- Titan version: 0.5.4, using thrift+Cassandra mode
The Titan version adapted by graphdb-benchmark is 0.5.4.
2 Test results
2.1 Batch insertion performance
| Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) |
|---|---|---|---|
| Titan | 9.516 | 88.123 | 111.586 |
| RocksDB | 2.345 | 14.076 | 16.636 |
| Cassandra | 11.930 | 108.709 | 101.959 |
| Memory | 3.077 | 15.204 | 13.841 |
Explanation
- The data in the table header "()" represents the data scale, in terms of edges
- The data in the table is the batch insertion time, in seconds
- For example, HugeGraph(RocksDB) takes 14.076s to insert the 3 million edges of the amazon0601 dataset, a rate of about 210,000 (21w) edges/s
Conclusion
- The insertion performance of the RocksDB and Memory backends is better than Cassandra
- When HugeGraph and Titan both use Cassandra as the backend, their insertion performance is close
2.2 Traversal performance
2.2.1 Explanation of terms
- FN (Find Neighbor): traverse all vertices, find the adjacent edges of each vertex, and use the edge and vertex to find the other vertex
- FA (Find Adjacent): traverse all edges and get the source vertex and target vertex of each edge
2.2.2 FN performance
| Backend | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w) |
|---|---|---|---|
| Titan | 7.724 | 70.935 | 128.884 |
| RocksDB | 8.876 | 65.852 | 63.388 |
| Cassandra | 13.125 | 126.959 | 102.580 |
| Memory | 22.309 | 207.411 | 165.609 |
Explanation
- The data in the table header "()" represents the data scale, in terms of vertices
- The data in the table is the time spent traversing the vertices, in seconds
- For example, HugeGraph with the RocksDB backend traverses all vertices in amazon0601 and finds the adjacent edges and the other vertex, taking a total of 65.852s
2.2.3 FA performance
| Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) |
|---|---|---|---|
| Titan | 7.119 | 63.353 | 115.633 |
| RocksDB | 6.032 | 64.526 | 52.721 |
| Cassandra | 9.410 | 102.766 | 94.197 |
| Memory | 12.340 | 195.444 | 140.89 |
Explanation
- The data in the table header "()" represents the data scale, in terms of edges
- The data in the table is the time spent traversing the edges, in seconds
- For example, HugeGraph with the RocksDB backend traverses all edges in amazon0601 and queries the two vertices of each edge, taking a total of 64.526s
Conclusion
- HugeGraph RocksDB > Titan thrift+Cassandra > HugeGraph Cassandra > HugeGraph Memory
2.3 Performance of common graph analysis methods in HugeGraph
Explanation of terms
- FS (Find Shortest Path): find the shortest path
- K-neighbor: all vertices reachable within K hops from the starting vertex, including those reachable in 1, 2, 3…(K-1), K hops
- K-out: vertices reachable through exactly K out-edges from the starting vertex
FS performance
| Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) |
|---|---|---|---|
| Titan | 11.333 | 0.313 | 376.06 |
| RocksDB | 44.391 | 2.221 | 268.792 |
| Cassandra | 39.845 | 3.337 | 331.113 |
| Memory | 35.638 | 2.059 | 388.987 |
Explanation
- The data in the table header "()" represents the data scale, in terms of edges
- The data in the table is the time to find the shortest paths from the first vertex to 100 randomly chosen vertices, in seconds
- For example, HugeGraph with the RocksDB backend finds the shortest paths from the first vertex to 100 random vertices in amazon0601 in a total of 2.221s
Conclusion
- With small data sizes or few vertex associations, Titan's shortest path performance is better than HugeGraph's
- As the data size grows and vertex connectivity increases, HugeGraph's shortest path performance surpasses Titan's
K-neighbor performance
| Vertex / Depth | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| v1 | 0.031s | 0.033s | 0.048s | 0.500s | 11.27s | OOM |
| v111 | 0.027s | 0.034s | 0.115s | 1.36s | OOM | – |
| v1111 | 0.039s | 0.027s | 0.052s | 0.511s | 10.96s | OOM |
Explanation
- HugeGraph-Server's JVM memory is set to 32GB; OOM occurs when the data volume is too large
K-out performance
| Vertex / Depth | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| v1 (time) | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s | OOM |
| v1 (degree) | 10 | 133 | 2453 | 50,830 | 1,128,688 | – |
| v111 (time) | 0.032s | 0.042s | 0.136s | 1.25s | 20.62s | OOM |
| v111 (degree) | 10 | 211 | 4944 | 113,150 | 2,629,970 | – |
| v1111 (time) | 0.039s | 0.045s | 0.053s | 1.10s | 2.92s | OOM |
| v1111 (degree) | 10 | 140 | 2555 | 50,825 | 1,070,230 | – |
Explanation
- HugeGraph-Server's JVM memory is set to 32GB; OOM occurs when the data volume is too large
Conclusion
- In the FS scenario, HugeGraph outperforms Titan
- In the K-neighbor and K-out scenarios, HugeGraph returns results within seconds for up to 5 degrees
2.4 Comprehensive graph performance test - CW
| Database | Size 1000 | Size 5000 | Size 10000 | Size 20000 |
|---|---|---|---|---|
| Titan | 45.943 | 849.168 | 2737.117 | 9791.46 |
| Memory(core) | 41.077 | 1825.905 | * | * |
| Cassandra(core) | 39.783 | 862.744 | 2423.136 | 6564.191 |
| RocksDB(core) | 33.383 | 199.894 | 763.869 | 1677.813 |
Explanation
- "Size" is in terms of vertices
- The data in the table is the time for community detection to complete, in seconds; for example, HugeGraph with the RocksDB backend takes 763.869s on the dataset of size 10000 until the community aggregation no longer changes
- "*" means it did not finish within 10000s
- The CW test is a comprehensive evaluation of CRUD operations
- The last three rows are different HugeGraph backends; in this test HugeGraph, like Titan, operates directly on the core rather than through the client
Conclusion
- With the Cassandra backend, HugeGraph slightly outperforms Titan, and the advantage grows with the data size; at size 20000 it is 30% faster than Titan
- With the RocksDB backend, HugeGraph far outperforms Titan and HugeGraph's Cassandra backend, being about 6x and 4x faster respectively
9 - Contribution Guidelines
9.1 - How to Contribute to HugeGraph
Thanks for taking the time to contribute! As an open source project, HugeGraph looks forward to contributions from everyone, and we are grateful to all contributors.
The following is a contribution guide for HugeGraph:
1. Preparation
We can contribute by reporting issues, submitting code patches or any other feedback.
Before submitting the code, we need to do some preparation:
Sign up or log in to GitHub: https://github.com
Fork HugeGraph repo from GitHub: https://github.com/apache/incubator-hugegraph/fork
Clone code from fork repo to local: https://github.com/${GITHUB_USER_NAME}/hugegraph
# clone code from remote to local repo
git clone https://github.com/${GITHUB_USER_NAME}/hugegraph
Configure local HugeGraph repo
cd hugegraph
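It is also common to add the official repository as a second remote so that upstream changes can be fetched and rebased later; a sketch, assuming the apache/incubator-hugegraph repo as upstream:
# add the official repo as an upstream remote and verify
git remote add hugegraph https://github.com/apache/incubator-hugegraph.git
git remote -v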
10 - CHANGELOGS
10.1 - HugeGraph 1.0.0 Release Notes
OLTP API & Client Changes
API Changes
- feat(api): support hot set trace through /exception/trace API.
- feat(api): support query by Cypher language.
- feat(api): support swagger UI to viewing API.
Client Changes
- feat(client) support Cypher query API.
- refact(client): change 'limit' type from long to int.
- feat(client): server bypass for hbase writing (Beta).
Core & Server
Feature Changes
- feat: support Java 11.
- feat(core): support adamic-adar & resource-allocation algorithms.
- feat(hbase): support hash rowkey & pre-init tables.
- feat(core): support query by Cypher language.
- feat(core): support automatic management and fail-over for cluster role.
- feat(core): support 16 OLAP algorithms, like: LPA, Louvain, PageRank, BetweennessCentrality, RingsDetect.
- feat: prepare for Apache release.
Bug Fix
- fix(core): can’t query edges by multi labels + properties.
- fix(core): occasionally NoSuchMethodError Relations().
- fix(core): limit max depth for cycle detection.
- fix(core): traversal contains Tree step has different result.
- fix edge batch update error.
- fix unexpected task status.
- fix(core): edge cache not clear when update or delete associated vertex.
- fix(mysql): run g.V() is error when it’s MySQL backend.
- fix: close exception and server-info EXPIRED_INTERVAL.
- fix: export ConditionP.
- fix: query by within + Text.contains.
- fix: schema label race condition of addIndexLabel/removeIndexLabel.
- fix: limit admin role can drop graph.
- fix: ProfileApi url check & add build package to ignore file.
- fix: can’t shut down when starting with exception.
- fix: Traversal.graph is empty in StepStrategy.apply() with count().is(0).
- fix: possible extra comma before where statement in MySQL backend.
- fix: JNA UnsatisfiedLinkError for Apple M1.
- fix: start RpcServer NPE & args count of ACTION_CLEARED error & example error.
- fix: rpc server not start.
- fix: User-controlled data in numeric cast & remove word dependency.
- fix: closing iterators on errors for Cassandra & Mysql.
Option Changes
- move raft.endpoint option from graph scope to server scope.
Other Changes
- refact(core): enhance schema job module.
- refact(raft): improve raft module & test & install snapshot and add peer.
- refact(core): remove early cycle detection & limit max depth.
- cache: fix assert node.next==empty.
- fix apache license conflicts: jnr-posix and jboss-logging.
- chore: add logo in README & remove outdated log4j version.
- refact(core): improve CachedGraphTransaction perf.
- chore: update CI config & support ci robot & add codeQL SEC-check & graph option.
- refact: ignore security check api & fix some bugs & clean code.
- doc: enhance CONTRIBUTING.md & README.md.
- refact: add checkstyle plugin & clean/format the code.
- refact(core): improve decode string empty bytes & avoid array-construct columns in BackendEntry.
- refact(cassandra): translate ipv4 to ipv6 metrics & update cassandra dependency version.
- chore: use .asf.yaml for apache workflow & replace APPLICATION_JSON with TEXT_PLAIN.
- feat: add system schema store.
- refact(rocksdb): update rocksdb version to 6.22 & improve rocksdb code.
- refact: update mysql scope to test & clean protobuf style/configs.
- chore: upgrade Dockerfile server to 0.12.0 & add editorconfig & improve ci.
- chore: upgrade grpc version.
- feat: support updateIfPresent/updateIfAbsent operation.
- chore: modify abnormal logs & upgrade netty-all to 4.1.44.
- refact: upgrade dependencies & adopt new analyzer & clean code.
- chore: improve .gitignore & update ci configs & add RAT/flatten plugin.
- chore(license): add dependencies-check ci & 3rd-party dependency licenses.
- refact: Shutdown log when shutdown process & fix tx leak & enhance the file path.
- refact: rename package to apache & dependency in all modules (Breaking Change).
- chore: add license checker & update antrun plugin & fix building problem in windows.
- feat: support one-step script for apache release v1.0.0 release.
Computer (OLAP)
Algorithm Changes
- feat: implement page-rank algorithm.
- feat: implement wcc algorithm.
- feat: implement degree centrality.
- feat: implement triangle_count algorithm.
- feat: implement rings-detection algorithm.
- feat: implement LPA algorithm.
- feat: implement kcore algorithm.
- feat: implement closeness centrality algorithm.
- feat: implement betweenness centrality algorithm.
- feat: implement cluster coefficient algorithm.
Platform Changes
- feat: init module computer-core & computer-algorithm & etcd dependency.
- feat: add Id as base type of vertex id.
- feat: init Vertex/Edge/Properties & JsonStructGraphOutput.
- feat: load data from hugegraph server.
- feat: init basic combiner, Bsp4Worker, Bsp4Master.
- feat: init sort & transport interface & basic FileInput/Output Stream.
- feat: init computation & ComputerOutput/Driver interface.
- feat: init Partitioner and HashPartitioner
- feat: init Master/WorkerService module.
- feat: init Heap/LoserTree sorting.
- feat: init rpc module.
- feat: init transport server, client, en/decode, flowControl, heartbeat.
- feat: init DataDirManager & PointerCombiner.
- feat: init aggregator module & add copy() and assign() methods to Value class.
- feat: add startAsync and finishAsync on client side, add onStarted and onFinished on server side.
- feat: init store/sort module.
- feat: link managers in worker sending end.
- feat: implement data receiver of worker.
- feat: implement StreamGraphInput and EntryInput.
- feat: add Sender and Receiver to process compute message.
- feat: add seqfile format.
- feat: add ComputeManager.
- feat: add computer-k8s and computer-k8s-operator.
- feat: add startup and make docker image code.
- feat: sort different type of message use different combiner.
- feat: add HDFS output format.
- feat: mount config-map and secret to container.
- feat: support java11.
- feat: support partition concurrent compute.
- refact: abstract computer-api from computer-core.
- refact: optimize data receiving.
- fix: release file descriptor after input and compute.
- doc: add operator deploy readme.
- feat: prepare for Apache release.
Toolchain (loader, tools, hubble)
- feat(loader): support use SQL to construct graph.
- feat(loader): support Spark-Loader mode(include jdbc source).
- feat(loader): support Flink-CDC mode.
- fix(loader): fix NPE when loading ORC data.
- fix(loader): fix schema is not cached with Spark/Flink mode.
- fix(loader): fix json deserialize error.
- fix(loader): fix jackson conflicts & missing dependencies.
- feat(hubble): supplementary algorithms UI.
- feat(hubble): support highlighting and hints for Gremlin text.
- feat(hubble): add docker-file for hubble.
- feat(hubble): display packaging log output progress while building.
- fix(hubble): fix port-input placeholder UI.
- feat: prepare for Apache release.
Commons (common,rpc)
- feat: support assert-throws method returning Future.
- feat: add Cnm and Anm to CollectionUtil.
- feat: support custom content-type.
- feat: prepare for Apache release.
Release Details
Please check the release details in each repository:
10.2 - HugeGraph 0.12 Release Notes
API & Client
API updates
- Support connecting to the graph service in https + auth mode (hugegraph-client #109 #110)
- Unify the parameter names and default values of OLTP APIs such as kout/kneighbor (hugegraph-client #122 #123)
- Support full-text search on properties via P.textcontains() in the RESTful API (hugegraph #1312)
- Add the graph_read_mode API to switch between OLTP and OLAP read modes (hugegraph #1332)
- Support aggregate properties of list/set type (hugegraph #1332)
- Add the METRICS resource type to the auth API (hugegraph #1355, hugegraph-client #114)
- Add the SCHEMA resource type to the auth API (hugegraph #1362, hugegraph-client #117)
- Add a manual compact API, supporting the rocksdb/cassandra/hbase backends (hugegraph #1378)
- Add login/logout APIs to the auth API, supporting issuing and revoking tokens (hugegraph #1500, hugegraph-client #125)
- Add the project API to the auth API (hugegraph #1504, hugegraph-client #127)
- Add an OLAP write-back API, supporting the cassandra/rocksdb backends (hugegraph #1506, hugegraph-client #129)
- Add an API that returns all the schema of a graph (hugegraph #1567, hugegraph-client #134)
- Change the HTTP status code of the property key create/update APIs to 202 (hugegraph #1584)
- Enhance Text.contains() to support 3 formats: "word", "(word)", "(word1|word2|word3)" (hugegraph #1652)
- Unify the behavior of special characters in properties (hugegraph #1670 #1684)
- Support dynamically creating, cloning, and dropping graph instances (hugegraph-client #135)
Other changes
- Fix the issue that the IndexLabelV56 id is lost when restoring an index label (hugegraph-client #118)
- Add a name() method to the Edge class (hugegraph-client #121)
Core & Server
Feature updates
- Support dynamically creating graph instances (hugegraph #1065)
- Support invoking OLTP algorithms through Gremlin (hugegraph #1289)
- Support multiple clusters sharing one graph auth service to share permission information (hugegraph #1350)
- Support cache synchronization across multiple nodes (hugegraph #1357)
- Support OLTP algorithms using native collections to reduce GC pressure and improve performance (hugegraph #1409)
- Support taking or restoring snapshots for newly added Raft nodes (hugegraph #1439)
- Support building secondary indexes on collection properties (hugegraph #1474)
- Support audit logs, with compression, rate limiting, and related features (hugegraph #1492 #1493)
- Support OLTP algorithms using high-performance parallel lock-free native collections to improve performance (hugegraph #1552)
Bug fixes
- Fix an NPE in the weighted shortest path algorithm (hugegraph #1250)
- Add a whitelist of safe Raft-related operations (hugegraph #1257)
- Fix RocksDB instances not being closed properly (hugegraph #1264)
- Explicitly trigger a Raft snapshot after a truncate operation (hugegraph #1275)
- Fix the Raft leader not updating the cache when receiving requests forwarded by a follower (hugegraph #1279)
- Fix unstable results of the weighted shortest path algorithm (hugegraph #1280)
- Fix the limit parameter of the rays algorithm not taking effect (hugegraph #1284)
- Fix the capacity parameter of the neighborrank algorithm not being checked (hugegraph #1290)
- Fix PostgreSQL initialization failure when no database with the same name as the user exists (hugegraph #1293)
- Fix HBase backend initialization failure when Kerberos is enabled (hugegraph #1294)
- Fix the wrong shard-end judgment in the HBase/RocksDB backends (hugegraph #1306)
- Fix the weighted shortest path algorithm not checking that the target vertex exists (hugegraph #1307)
- Fix non-String ids in the personalrank/neighborrank algorithms (hugegraph #1310)
- Check that only the master node may schedule gremlin jobs (hugegraph #1314)
- Fix partially inaccurate results of g.V().hasLabel().limit(n) caused by index covering (hugegraph #1316)
- Fix the NaN error in the jaccardsimilarity algorithm when the union is empty (hugegraph #1324)
- Fix schema data not being synchronized between nodes when operating on a Raft follower (hugegraph #1325)
- Fix TTL not taking effect because the tx was not closed (hugegraph #1330)
- Fix exception handling when a gremlin job result exceeds the Cassandra limit but is within the task limit (hugegraph #1334)
- Check that the graph exists for the auth-delete and role-get API operations (hugegraph #1338)
- Fix abnormal serialization when async task results contain path/tree (hugegraph #1351)
- Fix an NPE when initializing the admin user (hugegraph #1360)
- Fix the atomicity of async task operations, ensuring update/get fields and re-schedule are atomic (hugegraph #1361)
- Fix the NONE resource type in the auth module (hugegraph #1362)
- Fix the SecurityException on truncate and the loss of admin info when auth is enabled (hugegraph #1365)
- Fix auth exceptions being ignored when parsing data with auth enabled (hugegraph #1380)
- Fix AuthManager trying to connect to other nodes during initialization (hugegraph #1381)
- Fix base64 decoding errors caused by certain shard info (hugegraph #1383)
- Fix the empty creator when verifying permissions with consistent-hash LB and auth enabled (hugegraph #1385)
- Improve auth so that the VAR resource no longer depends on the VERTEX resource (hugegraph #1386)
- Standardize schema operations to depend only on the specific resource when auth is enabled (hugegraph #1387)
- Change some operations from depending on the STATUS resource to the ANY resource when auth is enabled (hugegraph #1391)
- Forbid initializing an empty admin password when auth is enabled (hugegraph #1400)
- Check that username/password are not empty when creating a user (hugegraph #1402)
- Fix PrimaryKey or SortKey being settable as nullable properties when updating a Label (hugegraph #1406)
- Fix ScyllaDB losing paged results (hugegraph #1407)
- Fix the weight property being force-cast to double in the weighted shortest path algorithm (hugegraph #1432)
- Unify the naming of the degree parameter in OLTP algorithms (hugegraph #1433)
- Fix the fusiformsimilarity algorithm returning all vertices when similars is empty (hugegraph #1434)
- Improve the paths algorithm to return an empty path when the start and target vertices are the same (hugegraph #1435)
- Change the default value of the kout/kneighbor limit parameter from 10 to 10000000 (hugegraph #1436)
- Fix the '+' in pagination info being URL-encoded as a space (hugegraph #1437)
- Improve the error message of the edge-update API (hugegraph #1443)
- Fix the kout algorithm's degree not taking effect across all labels (hugegraph #1459)
- Improve the kneighbor/kout algorithms so the start vertex does not appear in the result set (hugegraph #1459 #1463)
- Unify the behavior of the Get and Post versions of kout/kneighbor (hugegraph #1470)
- Improve the error message when vertex types do not match while creating an edge (hugegraph #1477)
- Fix residual indexes of Range Index (hugegraph #1498)
- Fix auth operations not invalidating the cache (hugegraph #1528)
- Change the default value of sameneighbor's limit parameter from 10 to 10000000 (hugegraph #1530)
- Fix the clear API so that not all backends call create snapshot (hugegraph #1532)
- Fix Index Label creation blocking in loading mode (hugegraph #1548)
- Fix adding a graph to or removing a graph from a project (hugegraph #1562)
- Improve some error messages of auth operations (hugegraph #1563)
- Support setting float properties to Infinity/NaN values (hugegraph #1578)
- Fix the quorum read issue when Raft safe_read is enabled (hugegraph #1618)
- Fix the unit of the token expiration time option (hugegraph #1625)
- Fix a MySQL Statement resource leak (hugegraph #1627)
- Fix Schema.getIndexLabel returning no data under race conditions (hugegraph #1629)
- Fix HugeVertex4Insert not being serializable (hugegraph #1630)
- Fix the MySQL count Statement not being closed (hugegraph #1640)
- Fix inconsistent state when deleting an Index Label fails (hugegraph #1642)
- Fix statements not being closed when a MySQL gremlin execution times out (hugegraph #1643)
- Improve Search Index to be compatible with special Unicode characters \u0000 to \u0003 (hugegraph #1659)
- Fix the Char not being converted to String issue introduced by #1659 (hugegraph #1664)
- Fix abnormal results of has() + within() queries (hugegraph #1680)
- Upgrade Log4j to 2.17 to fix security vulnerabilities (hugegraph #1686 #1698 #1702)
- Fix an NPE in HBase backend shard scans when the startkey contains an empty string (hugegraph #1691)
- Fix performance degradation of the paths algorithm when traversing deep cycles (hugegraph #1694)
- Improve the default parameter values and error checking of the personalrank algorithm (hugegraph #1695)
- Fix the RESTful API P.within condition not taking effect (hugegraph #1704)
- Fix being unable to dynamically create graphs when auth is enabled (hugegraph #1708)
Configuration changes:
- Share the naming of SSL-related configuration options (hugegraph #1260)
- Support the RocksDB option rocksdb.level_compaction_dynamic_level_bytes (hugegraph #1262)
- Remove the RESTful server protocol option restserver.protocol; the scheme is now extracted from the URL automatically (hugegraph #1272)
- Add the PostgreSQL option jdbc.postgresql.connect_database (hugegraph #1293)
- Add the option vertex.encode_primary_key_number to control whether vertex primary-key numbers are encoded (hugegraph #1323)
- Add the option query.optimize_aggregate_by_index to control whether index optimization is enabled for aggregate queries (hugegraph #1549)
- Change the default value of cache_type from l1 to l2 (hugegraph #1681)
- Add the JDBC forced-reconnect option jdbc.forced_auto_reconnect (hugegraph #1710)
Other changes
- Add a default SSL Certificate file (hugegraph #1254)
- OLTP parallel requests share a thread pool instead of each request using its own (hugegraph #1258)
- Fix issues in Example (hugegraph #1308)
- Use jraft version 1.3.5 (hugegraph #1313)
- Disable RocksDB WAL when Raft mode is enabled (hugegraph #1318)
- Use TarLz4Util to improve snapshot compression performance (hugegraph #1336)
- Bump the store version, because property key adds read frequency (hugegraph #1341)
- Use queryVertex/queryEdge methods instead of iterator methods in the vertex/edge Get APIs (hugegraph #1345)
- Support BFS-optimized multi-degree queries (hugegraph #1359)
- Improve the query performance issue caused by RocksDB deleteRange() (hugegraph #1375)
- Fix the travis-ci "cannot find symbol Namifiable" issue (hugegraph #1376)
- Ensure RocksDB snapshots are on the same disk as the one specified by data path (hugegraph #1392)
- Fix inaccurate free_memory calculation on MacOS (hugegraph #1396)
- Add a Raft onBusy callback to cooperate with rate limiting (hugegraph #1401)
- Upgrade netty-all from 4.1.13.Final to 4.1.42.Final (hugegraph #1403)
- Support pausing the TaskScheduler when set to loading mode (hugegraph #1414)
- Fix issues in the raft-tools script (hugegraph #1416)
- Fix the license params issue (hugegraph #1420)
- Improve the performance of writing audit logs via batch flush & async write (hugegraph #1448)
- Add logging of MySQL connection URLs (hugegraph #1451)
- Improve the performance of user info verification (hugegraph #1460)
- Fix TTL errors caused by the start time (hugegraph #1478)
- Support hot reload of log configuration and compression of audit logs (hugegraph #1492)
- Support per-user rate limiting of audit logs (hugegraph #1493)
- Support user-defined expiration time in the RamCache (hugegraph #1494)
- Cache the login role on the auth client side to avoid repeated RPC calls (hugegraph #1507)
- Fix IdSet.contains() not overriding AbstractCollection.contains() (hugegraph #1511)
- Fix the missing rollback when commitPartOfEdgeDeletions() fails (hugegraph #1513)
- Improve Cache metrics performance (hugegraph #1515)
- Print exception logs when license operation errors occur (hugegraph #1522)
- Improve the SimilarsMap implementation (hugegraph #1523)
- Use a tokenless way to update coverage (hugegraph #1529)
- Improve the code of the project update API (hugegraph #1537)
- Allow access to GRAPH_STORE from option() (hugegraph #1546)
- Optimize kout/kneighbor count queries to avoid copying collections (hugegraph #1550)
- Optimize shortestpath traversal to start from the side with less data (hugegraph #1569)
- Improve the allowed-keys hint of the rocksdb.data_disks option (hugegraph #1585)
- Optimize the performance of the id2code method in OLTP traversal for number ids (hugegraph #1623)
- Optimize HugeElement.getProperties() to return Collection<Property> (hugegraph #1624)
- Add the APACHE PROPOSAL file (hugegraph #1644)
- Improve the close tx workflow (hugegraph #1655)
- Catch all types of exceptions for MySQL close on reset() (hugegraph #1661)
- Improve the OLAP property module code (hugegraph #1675)
- Improve the execution performance of the query module (hugegraph #1711)
Loader
- Support importing Parquet files (hugegraph-loader #174)
- Support HDFS Kerberos authentication (hugegraph-loader #176)
- Support connecting to the server over HTTPS to import data (hugegraph-loader #183)
- Fix the trust store file path issue (hugegraph-loader #186)
- Handle exceptions when resetting loading mode (hugegraph-loader #187)
- Add non-null property checks when inserting data (hugegraph-loader #190)
- Fix time-judgment issues caused by different client/server time zones (hugegraph-loader #192)
- Optimize data parsing performance (hugegraph-loader #194)
- Check that a user-specified file header is not empty (hugegraph-loader #195)
- Fix the MySQL struct.json format issue in the example program (hugegraph-loader #198)
- Fix inaccurate vertex/edge import speed (hugegraph-loader #200 #205)
- Ensure vertices are imported before edges when check-vertex is enabled (hugegraph-loader #206)
- Fix array overflow when edge Json data import formats are inconsistent (hugegraph-loader #211)
- Fix an NPE caused by a missing edge mapping file (hugegraph-loader #213)
- Fix read time possibly being negative (hugegraph-loader #215)
- Improve log printing of directory files (hugegraph-loader #223)
- Improve the loader's schema handling workflow (hugegraph-loader #230)
Tools
- Support the HTTPS protocol (hugegraph-tools #71)
- Remove the --protocol parameter; it is now extracted from the URL automatically (hugegraph-tools #72)
- Support dumping data to the HDFS file system (hugegraph-tools #73)
- Fix the trust store file path issue (hugegraph-tools #75)
- Support backup and restore of auth information (hugegraph-tools #76)
- Support Printer output with no parameters (hugegraph-tools #79)
- Fix the macOS free_memory calculation issue (hugegraph-tools #82)
- Support specifying the number of threads for backup/restore (hugegraph-tools #83)
- Support commands to dynamically create, clone, and drop graphs (hugegraph-tools #95)
11 - Contributor Agreement
Individual Contributor exclusive License Agreement
(including the TRADITIONAL PATENT LICENSE OPTION)
Thank you for your interest in contributing to HugeGraph’s all projects (“We” or “Us”).
The purpose of this contributor agreement (“Agreement”) is to clarify and document the rights granted by contributors to Us. To make this document effective, please follow the comment of GitHub CLA-Assistant when submitting a new pull request.
How to use this Contributor Agreement
If You are an employee and have created the Contribution as part of your employment, You need to have Your employer approve this Agreement or sign the Entity version of this document. If You do not own the Copyright in the entire work of authorship, any other author of the Contribution should also sign this – in any event, please contact Us at hugegraph@googlegroups.com
1. Definitions
“You” means the individual Copyright owner who Submits a Contribution to Us.
“Contribution” means any original work of authorship, including any original modifications or additions to an existing work of authorship, Submitted by You to Us, in which You own the Copyright.
“Copyright” means all rights protecting works of authorship, including copyright, moral and neighboring rights, as appropriate, for the full term of their existence.
“Material” means the software or documentation made available by Us to third parties. When this Agreement covers more than one software project, the Material means the software or documentation to which the Contribution was Submitted. After You Submit the Contribution, it may be included in the Material.
“Submit” means any act by which a Contribution is transferred to Us by You by means of tangible or intangible media, including but not limited to electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, Us, but excluding any transfer that is conspicuously marked or otherwise designated in writing by You as “Not a Contribution.”
“Documentation” means any non-software portion of a Contribution.
2. License grant
2.1 Copyright license to Us
Subject to the terms and conditions of this Agreement, You hereby grant to Us a worldwide, royalty-free, Exclusive, perpetual and irrevocable (except as stated in Section 8.2) license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, under the Copyright covering the Contribution to use the Contribution by all means, including, but not limited to:
- publish the Contribution,
- modify the Contribution,
- prepare derivative works based upon or containing the Contribution and/or to combine the Contribution with other Materials,
- reproduce the Contribution in original or modified form,
- distribute, to make the Contribution available to the public, display and publicly perform the Contribution in original or modified form.
2.2 Moral rights
Moral Rights remain unaffected to the extent they are recognized and not waivable by applicable law. Notwithstanding, You may add your name to the attribution mechanism customarily used in the Materials you Contribute to, such as the header of the source code files of Your Contribution, and We will respect this attribution when using Your Contribution.
2.3 Copyright license back to You
Upon such grant of rights to Us, We immediately grant to You a worldwide, royalty-free, non-exclusive, perpetual and irrevocable license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, under the Copyright covering the Contribution to use the Contribution by all means, including, but not limited to:
- publish the Contribution,
- modify the Contribution,
- prepare derivative works based upon or containing the Contribution and/or to combine the Contribution with other Materials,
- reproduce the Contribution in original or modified form,
- distribute, to make the Contribution available to the public, display and publicly perform the Contribution in original or modified form.
This license back is limited to the Contribution and does not provide any rights to the Material.
3. Patents
3.1 Patent license
Subject to the terms and conditions of this Agreement You hereby grant to Us and to recipients of Materials distributed by Us a worldwide, royalty-free, non-exclusive, perpetual and irrevocable (except as stated in Section 3.2) patent license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, to make, have made, use, sell, offer for sale, import and otherwise transfer the Contribution and the Contribution in combination with any Material (and portions of such combination). This license applies to all patents owned or controlled by You, whether already acquired or hereafter acquired, that would be infringed by making, having made, using, selling, offering for sale, importing or otherwise transferring of Your Contribution(s) alone or by combination of Your Contribution(s) with any Material.
3.2 Revocation of patent license
You reserve the right to revoke the patent license stated in section 3.1 if We make any infringement claim that is targeted at your Contribution and not asserted for a Defensive Purpose. An assertion of claims of the Patents shall be considered for a “Defensive Purpose” if the claims are asserted against an entity that has filed, maintained, threatened, or voluntarily participated in a patent infringement lawsuit against Us or any of Our licensees.
4. License obligations by Us
We agree to (sub)license the Contribution or any Materials containing, based on or derived from Your Contribution under the terms of any licenses the Free Software Foundation classifies as Free Software licenses and which are approved by the Open Source Initiative as Open Source licenses.
More specifically and in strict accordance with the above paragraph, We agree to (sub)license the Contribution or any Materials containing, based on or derived from the Contribution only in accordance with Our licensing policy available at: http://www.apache.org/licenses/LICENSE-2.0.
In addition, We may use the following licenses for Documentation in the Contribution: GFDL-1.2 (including any right to adopt any future version of a license).
We agree to license patents owned or controlled by You only to the extent necessary to (sub)license Your Contribution(s) and the combination of Your Contribution(s) with the Material under the terms of any licenses the Free Software Foundation classifies as Free Software licenses and which are approved by the Open Source Initiative as Open Source licenses.
5. Disclaimer
THE CONTRIBUTION IS PROVIDED “AS IS”. MORE PARTICULARLY, ALL EXPRESS OR IMPLIED WARRANTIES INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTY OF SATISFACTORY QUALITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT ARE EXPRESSLY DISCLAIMED BY YOU TO US AND BY US TO YOU. TO THE EXTENT THAT ANY SUCH WARRANTIES CANNOT BE DISCLAIMED, SUCH WARRANTY IS LIMITED IN DURATION AND EXTENT TO THE MINIMUM PERIOD AND EXTENT PERMITTED BY LAW.
6. Consequential damage waiver
TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT WILL YOU OR WE BE LIABLE FOR ANY LOSS OF PROFITS, LOSS OF ANTICIPATED SAVINGS, LOSS OF DATA, INDIRECT, SPECIAL, INCIDENTAL, CONSEQUENTIAL AND EXEMPLARY DAMAGES ARISING OUT OF THIS AGREEMENT REGARDLESS OF THE LEGAL OR EQUITABLE THEORY (CONTRACT, TORT OR OTHERWISE) UPON WHICH THE CLAIM IS BASED.
7. Approximation of disclaimer and damage waiver
IF THE DISCLAIMER AND DAMAGE WAIVER MENTIONED IN SECTION 5 AND SECTION 6 CANNOT BE GIVEN LEGAL EFFECT UNDER APPLICABLE LOCAL LAW, REVIEWING COURTS SHALL APPLY LOCAL LAW THAT MOST CLOSELY APPROXIMATES AN ABSOLUTE WAIVER OF ALL CIVIL OR CONTRACTUAL LIABILITY IN CONNECTION WITH THE CONTRIBUTION.
8. Term
8.1 This Agreement shall come into effect upon Your acceptance of the terms and conditions.
8.2 This Agreement shall apply for the term of the copyright and patents licensed here. However, You shall have the right to terminate the Agreement if We do not fulfill the obligations as set forth in Section 4. Such termination must be made in writing.
8.3 In the event of a termination of this Agreement Sections 5, 6, 7, 8 and 9 shall survive such termination and shall remain in full force thereafter. For the avoidance of doubt, Free and Open Source Software (sub)licenses that have already been granted for Contributions at the date of the termination shall remain in full force after the termination of this Agreement.
9. Miscellaneous
9.1 This Agreement and all disputes, claims, actions, suits or other proceedings arising out of this Agreement or relating in any way to it shall be governed by the laws of China excluding its private international law provisions.
9.2 This Agreement sets out the entire agreement between You and Us for Your Contributions to Us and overrides all other agreements or understandings.
9.3 In case of Your death, this Agreement shall continue with Your heirs. In case of more than one heir, all heirs must exercise their rights through a commonly authorized person.
9.4 If any provision of this Agreement is found void and unenforceable, such provision will be replaced to the extent possible with a provision that comes closest to the meaning of the original provision and that is enforceable. The terms and conditions set forth in this Agreement shall apply notwithstanding any failure of essential purpose of this Agreement or any limited remedy to the maximum extent possible under law.
9.5 You agree to notify Us of any facts or circumstances of which You become aware that would make this Agreement inaccurate in any respect.
Moved the raft.endpoint option from graph scope to server scope.

Please check the release details in each repository:

The following bug fixes and configuration option changes are included (a hedged configuration sketch follows this list):

- g.V().hasLabel().limit(n) (hugegraph #1316)
- rocksdb.level_compaction_dynamic_level_bytes (hugegraph #1262)
- restserver.protocol, and automatically extract the scheme from the URL (hugegraph #1272)
- jdbc.postgresql.connect_database (hugegraph #1293)
- vertex.encode_primary_key_number for specifying whether vertex primary keys should be encoded (hugegraph #1323)
- query.optimize_aggregate_by_index for enabling index optimization in aggregate queries (hugegraph #1549)
- cache_type from l1 to l2 (hugegraph #1681)
- jdbc.forced_auto_reconnect (hugegraph #1710)
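As a rough illustration of where the option names above could be set, here is a minimal sketch of a HugeGraph server configuration. The file locations and every value shown are assumptions for illustration only, not recommended settings or defaults; consult the Config section of the docs for authoritative values.

```properties
# conf/hugegraph.properties — illustrative values only, not defaults
# Let RocksDB size compaction levels dynamically (hugegraph #1262)
rocksdb.level_compaction_dynamic_level_bytes=true
# Whether vertex primary keys are number-encoded (hugegraph #1323)
vertex.encode_primary_key_number=true
# Use index optimization for aggregate queries (hugegraph #1549)
query.optimize_aggregate_by_index=false
# Force the JDBC backend to reconnect automatically (hugegraph #1710)
jdbc.forced_auto_reconnect=true

# conf/rest-server.properties — per the list above (hugegraph #1272), the
# protocol (http/https) is now extracted from the URL scheme rather than
# configured via a separate restserver.protocol key
restserver.url=https://127.0.0.1:8080
```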