From 1b4e808f6f4a4554951ddc7c64b81ec14424bbb5 Mon Sep 17 00:00:00 2001 From: Benjie Gillam Date: Fri, 28 Apr 2023 16:58:49 +0100 Subject: [PATCH 01/39] Extract common logic from ExecuteQuery, ExecuteMutation and ExecuteSubscriptionEvent --- spec/Section 6 -- Execution.md | 44 +++++++++++++++++++++------------- 1 file changed, 27 insertions(+), 17 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 5b8594e30..0e1cd2edc 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -134,12 +134,8 @@ ExecuteQuery(query, schema, variableValues, initialValue): - Let {queryType} be the root Query type in {schema}. - Assert: {queryType} is an Object type. - Let {selectionSet} be the top level selection set in {query}. -- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, - queryType, initialValue, variableValues)} _normally_ (allowing - parallelization). -- Let {errors} be the list of all _field error_ raised while executing the - selection set. -- Return an unordered map containing {data} and {errors}. +- Return {ExecuteRootSelectionSet(variableValues, initialValue, queryType, + selectionSet)}. ### Mutation @@ -156,11 +152,8 @@ ExecuteMutation(mutation, schema, variableValues, initialValue): - Let {mutationType} be the root Mutation type in {schema}. - Assert: {mutationType} is an Object type. - Let {selectionSet} be the top level selection set in {mutation}. -- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, - mutationType, initialValue, variableValues)} _serially_. -- Let {errors} be the list of all _field error_ raised while executing the - selection set. -- Return an unordered map containing {data} and {errors}. +- Return {ExecuteRootSelectionSet(variableValues, initialValue, mutationType, + selectionSet, true)}. 
### Subscription @@ -304,12 +297,8 @@ ExecuteSubscriptionEvent(subscription, schema, variableValues, initialValue): - Let {subscriptionType} be the root Subscription type in {schema}. - Assert: {subscriptionType} is an Object type. - Let {selectionSet} be the top level selection set in {subscription}. -- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, - subscriptionType, initialValue, variableValues)} _normally_ (allowing - parallelization). -- Let {errors} be the list of all _field error_ raised while executing the - selection set. -- Return an unordered map containing {data} and {errors}. +- Return {ExecuteRootSelectionSet(variableValues, initialValue, + subscriptionType, selectionSet)}. Note: The {ExecuteSubscriptionEvent()} algorithm is intentionally similar to {ExecuteQuery()} since this is how each event result is produced. @@ -325,6 +314,27 @@ Unsubscribe(responseStream): - Cancel {responseStream}. +## Executing the Root Selection Set + +To execute the root selection set, the object value being evaluated and the +object type need to be known, as well as whether it must be executed serially, +or may be executed in parallel. + +Executing the root selection set works similarly for queries (parallel), +mutations (serial), and subscriptions (where it is executed for each event in +the underlying Source Stream). + +ExecuteRootSelectionSet(variableValues, initialValue, objectType, selectionSet, +serial): + +- If {serial} is not provided, initialize it to {false}. +- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, + objectType, initialValue, variableValues)} _serially_ if {serial} is {true}, + _normally_ (allowing parallelization) otherwise. +- Let {errors} be the list of all _field error_ raised while executing the + selection set. +- Return an unordered map containing {data} and {errors}. 
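The extracted {ExecuteRootSelectionSet()} algorithm can be sketched in Python. This is only an illustrative sketch, not part of any real GraphQL library: `execute_selection_set` is a hypothetical stand-in for the spec's {ExecuteSelectionSet()} and returns placeholder data instead of resolving real fields.

```python
# Illustrative sketch of ExecuteRootSelectionSet() as extracted above.
# execute_selection_set is a hypothetical stand-in for the spec's
# ExecuteSelectionSet() and returns placeholder data.

def execute_selection_set(selection_set, object_type, value, variables, serial):
    """Resolve each field, serially if `serial`, else in parallel (stubbed)."""
    return {response_key: None for response_key in selection_set}, []

def execute_root_selection_set(variables, initial_value, object_type,
                               selection_set, serial=False):
    # If {serial} is not provided, it defaults to {false} (parallel execution).
    data, errors = execute_selection_set(
        selection_set, object_type, initial_value, variables, serial)
    # Return an unordered map containing {data} and {errors}.
    return {"data": data, "errors": errors}

# Queries and subscription events omit `serial`; mutations pass serial=True.
result = execute_root_selection_set({}, None, "Query", ["hero", "villain"])
```

Note how `serial` defaults to false, matching the "If {serial} is not provided, initialize it to {false}" step: {ExecuteQuery()} and {ExecuteSubscriptionEvent()} omit the argument, while {ExecuteMutation()} passes {true}.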
+ ## Executing Selection Sets To execute a _selection set_, the object value being evaluated and the object From c4eda2d262a5374c6724fe48f75b9182b71fa149 Mon Sep 17 00:00:00 2001 From: Benjie Gillam Date: Fri, 28 Apr 2023 17:20:43 +0100 Subject: [PATCH 02/39] Change ExecuteSelectionSet to ExecuteGroupedFieldSet --- spec/Section 6 -- Execution.md | 49 ++++++++++++++++++++-------------- 1 file changed, 29 insertions(+), 20 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 0e1cd2edc..5da841d16 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -324,31 +324,34 @@ Executing the root selection set works similarly for queries (parallel), mutations (serial), and subscriptions (where it is executed for each event in the underlying Source Stream). +First, the selection set is turned into a grouped field set; then, we execute +this grouped field set and return the resulting {data} and {errors}. + ExecuteRootSelectionSet(variableValues, initialValue, objectType, selectionSet, serial): - If {serial} is not provided, initialize it to {false}. -- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, +- Let {groupedFieldSet} be the result of {CollectFields(objectType, + selectionSet, variableValues)}. +- Let {data} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet, objectType, initialValue, variableValues)} _serially_ if {serial} is {true}, _normally_ (allowing parallelization) otherwise. - Let {errors} be the list of all _field error_ raised while executing the selection set. - Return an unordered map containing {data} and {errors}. -## Executing Selection Sets +## Executing a Grouped Field Set -To execute a _selection set_, the object value being evaluated and the object +To execute a grouped field set, the object value being evaluated and the object type need to be known, as well as whether it must be executed serially, or may be executed in parallel. 
-First, the selection set is turned into a grouped field set; then, each -represented field in the grouped field set produces an entry into a response -map. +Each represented field in the grouped field set produces an entry into a +response map. -ExecuteSelectionSet(selectionSet, objectType, objectValue, variableValues): +ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, +variableValues): -- Let {groupedFieldSet} be the result of {CollectFields(objectType, - selectionSet, variableValues)}. - Initialize {resultMap} to an empty ordered map. - For each {groupedFieldSet} as {responseKey} and {fields}: - Let {fieldName} be the name of the first entry in {fields}. Note: This value @@ -366,8 +369,8 @@ is explained in greater detail in the Field Collection section below. **Errors and Non-Null Fields** -If during {ExecuteSelectionSet()} a field with a non-null {fieldType} raises a -_field error_ then that error must propagate to this entire selection set, +If during {ExecuteGroupedFieldSet()} a field with a non-null {fieldType} raises +a _field error_ then that error must propagate to this entire selection set, either resolving to {null} if allowed or further propagated to a parent field. If this occurs, any sibling fields which have not yet executed or have not yet @@ -707,8 +710,9 @@ CompleteValue(fieldType, fields, result, variableValues): - Let {objectType} be {fieldType}. - Otherwise if {fieldType} is an Interface or Union type. - Let {objectType} be {ResolveAbstractType(fieldType, result)}. - - Let {subSelectionSet} be the result of calling {MergeSelectionSets(fields)}. - - Return the result of evaluating {ExecuteSelectionSet(subSelectionSet, + - Let {groupedFieldSet} be the result of calling {CollectSubfields(objectType, + fields, variableValues)}. + - Return the result of evaluating {ExecuteGroupedFieldSet(groupedFieldSet, objectType, result, variableValues)} _normally_ (allowing for parallelization). 
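The per-response-key loop of {ExecuteGroupedFieldSet()} might look as follows in Python. This is a rough sketch under simplifying assumptions: `execute_field` and `schema_fields` are hypothetical stand-ins for the spec's {ExecuteField()} and for the set of field names defined on {objectType}, and a field group is modelled as a list of field names.

```python
# Illustrative sketch of the ExecuteGroupedFieldSet() loop. execute_field and
# schema_fields are hypothetical stand-ins, not a real GraphQL API.

def execute_field(object_type, object_value, field_name, fields, variables):
    # A real executor coerces arguments, resolves, and completes the value.
    return object_value.get(field_name)

def execute_grouped_field_set(grouped_field_set, object_type, object_value,
                              variables, schema_fields):
    result_map = {}  # Python dicts preserve insertion order, like the spec's
                     # ordered map
    for response_key, fields in grouped_field_set.items():
        # The field name comes from the first entry; aliases do not affect it.
        field_name = fields[0]
        # Only execute fields whose fieldType is defined on objectType.
        if field_name in schema_fields:
            result_map[response_key] = execute_field(
                object_type, object_value, field_name, fields, variables)
    return result_map

data = execute_grouped_field_set(
    {"hero": ["name"], "skipped": ["unknownField"]},
    "Query", {"name": "R2-D2"}, {}, schema_fields={"name"})
```

The `skipped` entry is omitted from the result because its field is not defined on the object type, mirroring the spec's rule that a grouped field set entry without a corresponding field definition produces no entry in the response map.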
@@ -755,9 +759,9 @@ ResolveAbstractType(abstractType, objectValue): **Merging Selection Sets** -When more than one field of the same name is executed in parallel, the -_selection set_ for each of the fields are merged together when completing the -value in order to continue execution of the sub-selection sets. +When more than one field of the same name is executed in parallel, during value +completion their selection sets are collected together to produce a single +grouped field set in order to continue execution of the sub-selection sets. An example operation illustrating parallel fields with the same name with sub-selections. @@ -776,14 +780,19 @@ sub-selections. After resolving the value for `me`, the selection sets are merged together so `firstName` and `lastName` can be resolved for one value. -MergeSelectionSets(fields): +CollectSubfields(objectType, fields, variableValues): -- Let {selectionSet} be an empty list. +- Let {groupedFieldSet} be an empty map. - For each {field} in {fields}: - Let {fieldSelectionSet} be the selection set of {field}. - If {fieldSelectionSet} is null or empty, continue to the next field. - - Append all selections in {fieldSelectionSet} to {selectionSet}. -- Return {selectionSet}. + - Let {subGroupedFieldSet} be the result of {CollectFields(objectType, + fieldSelectionSet, variableValues)}. + - For each {subGroupedFieldSet} as {responseKey} and {subfields}: + - Let {groupForResponseKey} be the list in {groupedFieldSet} for + {responseKey}; if no such list exists, create it as an empty list. + - Append all fields in {subfields} to {groupForResponseKey}. +- Return {groupedFieldSet}. 
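The merging behaviour of {CollectSubfields()} can be sketched as follows. For brevity a selection set is modelled as a flat list of field names, and `collect_fields` is a simplified stand-in for the spec's {CollectFields()} (no fragments or directives); only the per-response-key merging is shown.

```python
# Illustrative sketch of CollectSubfields(); simplified stand-ins only.

def collect_fields(object_type, selection_set, variables):
    """Simplified CollectFields(): group plain field names by response key."""
    grouped = {}
    for field_name in selection_set:
        grouped.setdefault(field_name, []).append(field_name)
    return grouped

def collect_subfields(object_type, fields, variables):
    grouped_field_set = {}
    for field_selection_set in fields:
        if not field_selection_set:  # null or empty: continue to next field
            continue
        sub = collect_fields(object_type, field_selection_set, variables)
        for response_key, subfields in sub.items():
            # Append to the group for this response key, creating it as an
            # empty list if no such list exists yet.
            grouped_field_set.setdefault(response_key, []).extend(subfields)
    return grouped_field_set

# Two `me` fields selecting firstName/lastName merge into one grouped set:
merged = collect_subfields(
    "User", [["firstName"], ["lastName", "firstName"]], {})
```

Unlike the old {MergeSelectionSets()}, which concatenated raw selections for a later {CollectFields()} pass, this produces the grouped field set directly.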
### Handling Field Errors From 018dac8025e2e7f0aab5f1fc02c0db15f96524e4 Mon Sep 17 00:00:00 2001 From: Benjie Gillam Date: Mon, 21 Aug 2023 12:15:34 +0100 Subject: [PATCH 03/39] Correct reference to MergeSelectionSets --- spec/Section 5 -- Validation.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 5 -- Validation.md b/spec/Section 5 -- Validation.md index 75af96ffd..969d99f88 100644 --- a/spec/Section 5 -- Validation.md +++ b/spec/Section 5 -- Validation.md @@ -467,7 +467,7 @@ unambiguous. Therefore any two field selections which might both be encountered for the same object are only valid if they are equivalent. During execution, the simultaneous execution of fields with the same response -name is accomplished by {MergeSelectionSets()} and {CollectFields()}. +name is accomplished by {CollectSubfields()}. For simple hand-written GraphQL, this rule is obviously a clear developer error, however nested fragments can make this difficult to detect manually. From a620dc1f9ecec6ad7fb5c44b083fe8614bed4025 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 15 Feb 2024 22:23:30 +0200 Subject: [PATCH 04/39] moves Field Collection section earlier --- spec/Section 6 -- Execution.md | 212 ++++++++++++++++----------------- 1 file changed, 106 insertions(+), 106 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 5da841d16..1232bd242 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -340,6 +340,112 @@ serial): selection set. - Return an unordered map containing {data} and {errors}. +### Field Collection + +Before execution, the _selection set_ is converted to a grouped field set by +calling {CollectFields()}. Each entry in the grouped field set is a list of +fields that share a response key (the alias if defined, otherwise the field +name). This ensures all fields with the same response key (including those in +referenced fragments) are executed at the same time. 
+ +As an example, collecting the fields of this selection set would collect two +instances of the field `a` and one of field `b`: + +```graphql example +{ + a { + subfield1 + } + ...ExampleFragment +} + +fragment ExampleFragment on Query { + a { + subfield2 + } + b +} +``` + +The depth-first-search order of the field groups produced by {CollectFields()} +is maintained through execution, ensuring that fields appear in the executed +response in a stable and predictable order. + +CollectFields(objectType, selectionSet, variableValues, visitedFragments): + +- If {visitedFragments} is not provided, initialize it to the empty set. +- Initialize {groupedFields} to an empty ordered map of lists. +- For each {selection} in {selectionSet}: + - If {selection} provides the directive `@skip`, let {skipDirective} be that + directive. + - If {skipDirective}'s {if} argument is {true} or is a variable in + {variableValues} with the value {true}, continue with the next {selection} + in {selectionSet}. + - If {selection} provides the directive `@include`, let {includeDirective} be + that directive. + - If {includeDirective}'s {if} argument is not {true} and is not a variable + in {variableValues} with the value {true}, continue with the next + {selection} in {selectionSet}. + - If {selection} is a {Field}: + - Let {responseKey} be the response key of {selection} (the alias if + defined, otherwise the field name). + - Let {groupForResponseKey} be the list in {groupedFields} for + {responseKey}; if no such list exists, create it as an empty list. + - Append {selection} to the {groupForResponseKey}. + - If {selection} is a {FragmentSpread}: + - Let {fragmentSpreadName} be the name of {selection}. + - If {fragmentSpreadName} is in {visitedFragments}, continue with the next + {selection} in {selectionSet}. + - Add {fragmentSpreadName} to {visitedFragments}. + - Let {fragment} be the Fragment in the current Document whose name is + {fragmentSpreadName}. 
+ - If no such {fragment} exists, continue with the next {selection} in + {selectionSet}. + - Let {fragmentType} be the type condition on {fragment}. + - If {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue + with the next {selection} in {selectionSet}. + - Let {fragmentSelectionSet} be the top-level selection set of {fragment}. + - Let {fragmentGroupedFieldSet} be the result of calling + {CollectFields(objectType, fragmentSelectionSet, variableValues, + visitedFragments)}. + - For each {fragmentGroup} in {fragmentGroupedFieldSet}: + - Let {responseKey} be the response key shared by all fields in + {fragmentGroup}. + - Let {groupForResponseKey} be the list in {groupedFields} for + {responseKey}; if no such list exists, create it as an empty list. + - Append all items in {fragmentGroup} to {groupForResponseKey}. + - If {selection} is an {InlineFragment}: + - Let {fragmentType} be the type condition on {selection}. + - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType, + fragmentType)} is {false}, continue with the next {selection} in + {selectionSet}. + - Let {fragmentSelectionSet} be the top-level selection set of {selection}. + - Let {fragmentGroupedFieldSet} be the result of calling + {CollectFields(objectType, fragmentSelectionSet, variableValues, + visitedFragments)}. + - For each {fragmentGroup} in {fragmentGroupedFieldSet}: + - Let {responseKey} be the response key shared by all fields in + {fragmentGroup}. + - Let {groupForResponseKey} be the list in {groupedFields} for + {responseKey}; if no such list exists, create it as an empty list. + - Append all items in {fragmentGroup} to {groupForResponseKey}. +- Return {groupedFields}. + +DoesFragmentTypeApply(objectType, fragmentType): + +- If {fragmentType} is an Object Type: + - If {objectType} and {fragmentType} are the same type, return {true}, + otherwise return {false}. 
+- If {fragmentType} is an Interface Type: + - If {objectType} is an implementation of {fragmentType}, return {true} + otherwise return {false}. +- If {fragmentType} is a Union: + - If {objectType} is a possible type of {fragmentType}, return {true} + otherwise return {false}. + +Note: The steps in {CollectFields()} evaluating the `@skip` and `@include` +directives may be applied in either order since they apply commutatively. + ## Executing a Grouped Field Set To execute a grouped field set, the object value being evaluated and the object @@ -477,112 +583,6 @@ A correct executor must generate the following result for that _selection set_: } ``` -### Field Collection - -Before execution, the _selection set_ is converted to a grouped field set by -calling {CollectFields()}. Each entry in the grouped field set is a list of -fields that share a response key (the alias if defined, otherwise the field -name). This ensures all fields with the same response key (including those in -referenced fragments) are executed at the same time. - -As an example, collecting the fields of this selection set would collect two -instances of the field `a` and one of field `b`: - -```graphql example -{ - a { - subfield1 - } - ...ExampleFragment -} - -fragment ExampleFragment on Query { - a { - subfield2 - } - b -} -``` - -The depth-first-search order of the field groups produced by {CollectFields()} -is maintained through execution, ensuring that fields appear in the executed -response in a stable and predictable order. - -CollectFields(objectType, selectionSet, variableValues, visitedFragments): - -- If {visitedFragments} is not provided, initialize it to the empty set. -- Initialize {groupedFields} to an empty ordered map of lists. -- For each {selection} in {selectionSet}: - - If {selection} provides the directive `@skip`, let {skipDirective} be that - directive. 
- - If {skipDirective}'s {if} argument is {true} or is a variable in - {variableValues} with the value {true}, continue with the next {selection} - in {selectionSet}. - - If {selection} provides the directive `@include`, let {includeDirective} be - that directive. - - If {includeDirective}'s {if} argument is not {true} and is not a variable - in {variableValues} with the value {true}, continue with the next - {selection} in {selectionSet}. - - If {selection} is a {Field}: - - Let {responseKey} be the response key of {selection} (the alias if - defined, otherwise the field name). - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append {selection} to the {groupForResponseKey}. - - If {selection} is a {FragmentSpread}: - - Let {fragmentSpreadName} be the name of {selection}. - - If {fragmentSpreadName} is in {visitedFragments}, continue with the next - {selection} in {selectionSet}. - - Add {fragmentSpreadName} to {visitedFragments}. - - Let {fragment} be the Fragment in the current Document whose name is - {fragmentSpreadName}. - - If no such {fragment} exists, continue with the next {selection} in - {selectionSet}. - - Let {fragmentType} be the type condition on {fragment}. - - If {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue - with the next {selection} in {selectionSet}. - - Let {fragmentSelectionSet} be the top-level selection set of {fragment}. - - Let {fragmentGroupedFieldSet} be the result of calling - {CollectFields(objectType, fragmentSelectionSet, variableValues, - visitedFragments)}. - - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - - Let {responseKey} be the response key shared by all fields in - {fragmentGroup}. - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append all items in {fragmentGroup} to {groupForResponseKey}. 
- - If {selection} is an {InlineFragment}: - - Let {fragmentType} be the type condition on {selection}. - - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType, - fragmentType)} is {false}, continue with the next {selection} in - {selectionSet}. - - Let {fragmentSelectionSet} be the top-level selection set of {selection}. - - Let {fragmentGroupedFieldSet} be the result of calling - {CollectFields(objectType, fragmentSelectionSet, variableValues, - visitedFragments)}. - - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - - Let {responseKey} be the response key shared by all fields in - {fragmentGroup}. - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append all items in {fragmentGroup} to {groupForResponseKey}. -- Return {groupedFields}. - -DoesFragmentTypeApply(objectType, fragmentType): - -- If {fragmentType} is an Object Type: - - If {objectType} and {fragmentType} are the same type, return {true}, - otherwise return {false}. -- If {fragmentType} is an Interface Type: - - If {objectType} is an implementation of {fragmentType}, return {true} - otherwise return {false}. -- If {fragmentType} is a Union: - - If {objectType} is a possible type of {fragmentType}, return {true} - otherwise return {false}. - -Note: The steps in {CollectFields()} evaluating the `@skip` and `@include` -directives may be applied in either order since they apply commutatively. 
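The `@skip`/`@include` handling and response-key grouping in {CollectFields()} can be sketched in Python. This is an illustrative simplification: fragment spreads and inline fragments are omitted, a selection is modelled as a plain dict, and a string directive argument such as `"$flag"` names a variable.

```python
# Illustrative sketch of the @skip/@include steps of CollectFields().
# Selections are hypothetical dicts, not a real GraphQL AST.

def should_include(selection, variables):
    skip = selection.get("skip")  # @skip(if:) argument: literal or "$var"
    if skip is not None and variables.get(skip, skip) is True:
        return False
    include = selection.get("include")  # @include(if:) argument
    if include is not None and variables.get(include, include) is not True:
        return False
    return True

def collect_fields(object_type, selection_set, variables):
    grouped_fields = {}  # ordered map of lists, keyed by response key
    for selection in selection_set:
        if not should_include(selection, variables):
            continue
        # The response key is the alias if defined, otherwise the field name.
        response_key = selection.get("alias") or selection["name"]
        grouped_fields.setdefault(response_key, []).append(selection)
    return grouped_fields

grouped = collect_fields("Query", [
    {"name": "a"},
    {"name": "a", "alias": "b"},
    {"name": "c", "skip": True},        # @skip(if: true): excluded
    {"name": "d", "include": "$flag"},  # @include(if: $flag), $flag is false
], {"$flag": False})
```

As the spec notes, the `@skip` and `@include` checks commute, so testing them in either order yields the same grouped field set.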
- ## Executing Fields Each field requested in the grouped field set that is defined on the selected From 4e6df5c222249da6d4dd57c3e87d06780f5984df Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 15 Feb 2024 22:30:17 +0200 Subject: [PATCH 05/39] Introduce `@defer` directive --- spec/Section 6 -- Execution.md | 383 ++++++++++++++++++++++++++++----- 1 file changed, 332 insertions(+), 51 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 1232bd242..e4be2282e 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -255,12 +255,13 @@ CreateSourceEventStream(subscription, schema, variableValues, initialValue): - Let {groupedFieldSet} be the result of {CollectFields(subscriptionType, selectionSet, variableValues)}. - If {groupedFieldSet} does not have exactly one entry, raise a _request error_. -- Let {fields} be the value of the first entry in {groupedFieldSet}. -- Let {fieldName} be the name of the first entry in {fields}. Note: This value - is unaffected if an alias is used. -- Let {field} be the first entry in {fields}. +- Let {fieldDetailsList} be the value of the first entry in {groupedFieldSet}. +- Let {fieldDetails} be the first entry in {fieldDetailsList}. +- Let {field} be the corresponding entry on {fieldDetails}. +- Let {fieldName} be the name of {field}. Note: This value is unaffected if an + alias is used. - Let {argumentValues} be the result of {CoerceArgumentValues(subscriptionType, - field, variableValues)}. + node, variableValues)}. - Let {fieldStream} be the result of running {ResolveFieldEventStream(subscriptionType, initialValue, fieldName, argumentValues)}. @@ -331,14 +332,142 @@ ExecuteRootSelectionSet(variableValues, initialValue, objectType, selectionSet, serial): - If {serial} is not provided, initialize it to {false}. -- Let {groupedFieldSet} be the result of {CollectFields(objectType, - selectionSet, variableValues)}. 
-- Let {data} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet, - objectType, initialValue, variableValues)} _serially_ if {serial} is {true}, - _normally_ (allowing parallelization) otherwise. -- Let {errors} be the list of all _field error_ raised while executing the - selection set. -- Return an unordered map containing {data} and {errors}. +- Let {groupedFieldSet} and {newDeferUsages} be the result of + {CollectFields(objectType, selectionSet, variableValues)}. +- Let {fieldPlan} be the result of {BuildFieldPlan(groupedFieldSet)}. +- Let {data} and {incrementalDataRecords} be the result of + {ExecuteFieldPlan(newDeferUsages, fieldPlan, objectType, initialValue, + variableValues, serial)}. +- Let {errors} be the list of all _field error_ raised while completing {data}. +- If {incrementalDataRecords} is empty, return an unordered map containing + {data} and {errors}. +- Let {incrementalResults} be the result of {YieldIncrementalResults(data, + errors, incrementalDataRecords)}. +- Wait for the first result in {incrementalResults} to be available. +- Let {initialResult} be that result. +- Return {initialResult} and {BatchIncrementalResults(incrementalResults)}. + +### Yielding Incremental Results + +The procedure for yielding incremental results is specified by the +{YieldIncrementalResults()} algorithm. + +YieldIncrementalResults(data, errors, incrementalDataRecords): + +- Initialize {graph} to an empty directed acyclic graph. +- For each {incrementalDataRecord} of {incrementalDataRecords}: + - Add {incrementalDataRecord} to {graph} as a new Pending Data node directed + from the {pendingResults} that it completes, adding each of {pendingResults} + to {graph} as new nodes, if necessary, each directed from its {parent}, if + defined, recursively adding each {parent} as necessary. 
- Prune root nodes of {graph} containing no direct child Incremental Data
  Records, repeatedly if necessary, promoting any direct child Deferred
  Fragments of the pruned nodes to root nodes. (This ensures that no empty
  fragments are reported as pending).
- Let {newPendingResults} be the set of root nodes in {graph}.
- Let {pending} be the result of {GetPending(newPendingResults)}.
- Let {hasNext} be {true}.
- Yield an unordered map containing {data}, {errors}, {pending}, and {hasNext}.
- For each completed child Pending Incremental Data node of a root node in
  {graph}:
  - Let {incrementalDataRecord} be the Pending Incremental Data for that node;
    let {result} be the corresponding completed result.
  - If {data} on {result} is {null}:
    - Initialize {completed} to an empty list.
    - Let {parents} be the parent nodes of {incrementalDataRecord}.
    - For each {pendingResult} of {parents}:
      - Append {GetCompletedEntry(pendingResult, errors)} to {completed}.
      - Remove {pendingResult} and all of its descendant nodes from {graph},
        except for any descendant Incremental Data Record nodes with other
        parents.
    - Let {hasNext} be {false} if {graph} is empty.
    - Yield an unordered map containing {completed} and {hasNext}.
    - Continue to the next completed child Incremental Data node in {graph}.
  - Replace {node} in {graph} with a new node corresponding to the Completed
    Incremental Data for {result}.
  - Add each {incrementalDataRecord} of {incrementalDataRecords} on {result} to
    {graph} via the same procedure as above.
  - Let {completedDeferredFragments} be the set of root nodes in {graph} without
    any child Pending Data nodes.
  - Let {completedIncrementalDataNodes} be the set of completed Incremental Data
    nodes that are children of {completedDeferredFragments}.
  - If {completedIncrementalDataNodes} is empty, continue to the next completed
    child Incremental Data node in {graph}.
+ - Initialize {incremental} to an empty list. + - For each {node} of {completedIncrementalDataNodes}: + - Let {incrementalDataRecord} be the corresponding record for {node}. + - Append {GetIncrementalEntry(incrementalDataRecord, graph)} to + {incremental}. + - Remove {node} from {graph}. + - Initialize {completed} to an empty list. + - For each {pendingResult} of {completedDeferredFragments}: + - Append {GetCompletedEntry(pendingResult)} to {completed}. + - Remove {pendingResult} from {graph}, promoting its child nodes to root + nodes. + - Prune root nodes of {graph} containing no direct child Incremental Data + Records, as above. + - Let {hasNext} be {false} if {graph} is empty. + - Let {incrementalResult} be an unordered map containing {hasNext}. + - If {incremental} is not empty, set the corresponding entry on + {incrementalResult} to {incremental}. + - If {completed} is not empty, set the corresponding entry on + {incrementalResult} to {completed}. + - Let {newPendingResults} be the set of new root nodes in {graph}, promoted by + the above steps. + - If {newPendingResults} is not empty: + - Let {pending} be the result of {GetPending(newPendingResults)}. + - Set the corresponding entry on {incrementalResult} to {pending}. + - Yield {incrementalResult}. +- Complete this incremental result stream. + +GetPending(newPendingResults): + +- Initialize {pending} to an empty list. +- For each {newPendingResult} of {newPendingResults}: + - Let {id} be a unique identifier for {newPendingResult}. + - Let {path} and {label} be the corresponding entries on {newPendingResult}. + - Let {pendingEntry} be an unordered map containing {id}, {path}, and {label}. + - Append {pendingEntry} to {pending}. +- Return {pending}. + +GetIncrementalEntry(incrementalDataRecord, graph): + +- Let {deferredFragments} be the Deferred Fragments incrementally completed by + {incrementalDataRecord} at {path}. +- Let {result} be the result of {incrementalDataRecord}. 
+- Let {data} and {errors} be the corresponding entries on {result}. +- Let {releasedDeferredFragments} be the members of {deferredFragments} that are + root nodes in {graph}. +- Let {bestDeferredFragment} be the member of {releasedDeferredFragments} with + the shortest {path} entry. +- Let {subPath} be the portion of {path} not contained by the {path} entry of + {bestDeferredFragment}. +- Let {id} be the unique identifier for {bestDeferredFragment}. +- Return an unordered map containing {id}, {subPath}, {data}, and {errors}. + +GetCompletedEntry(pendingResult, errors): + +- Let {id} be the unique identifier for {pendingResult}. +- Let {completedEntry} be an unordered map containing {id}. +- If {errors} is not empty, set the corresponding entry on {completedEntry} to + {errors}. +- Return {completedEntry}. + +### Batching Incremental Results + +BatchIncrementalResults(incrementalResults): + +- Return a new stream {batchedIncrementalResults} which yields events as + follows: +- While {incrementalResults} is not closed: + - Let {availableIncrementalResults} be a list of one or more Incremental + Results available on {incrementalResults}. + - Let {batchedIncrementalResult} be an unordered map created by merging the + items in {availableIncrementalResults} into a single unordered map, + concatenating list entries as necessary, and setting {hasNext} to the value + of {hasNext} on the final item in the list. + - Yield {batchedIncrementalResult}. ### Field Collection @@ -371,10 +500,12 @@ The depth-first-search order of the field groups produced by {CollectFields()} is maintained through execution, ensuring that fields appear in the executed response in a stable and predictable order. -CollectFields(objectType, selectionSet, variableValues, visitedFragments): +CollectFields(objectType, selectionSet, variableValues, deferUsage, +visitedFragments): - If {visitedFragments} is not provided, initialize it to the empty set. 
- Initialize {groupedFields} to an empty ordered map of lists.
- Initialize {newDeferUsages} to an empty list.
- For each {selection} in {selectionSet}:
  - If {selection} provides the directive `@skip`, let {skipDirective} be that
    directive.
    - If {skipDirective}'s {if} argument is {true} or is a variable in
      {variableValues} with the value {true}, continue with the next {selection}
      in {selectionSet}.
  - If {selection} provides the directive `@include`, let {includeDirective} be
    that directive.
    - If {includeDirective}'s {if} argument is not {true} and is not a variable
      in {variableValues} with the value {true}, continue with the next
      {selection} in {selectionSet}.
  - If {selection} is a {Field}:
    - Let {responseKey} be the response key of {selection} (the alias if
      defined, otherwise the field name).
    - Let {fieldDetails} be a new unordered map containing {deferUsage}.
    - Set the entry for {field} on {fieldDetails} to {selection}.
    - Let {groupForResponseKey} be the list in {groupedFields} for
      {responseKey}; if no such list exists, create it as an empty list.
    - Append {fieldDetails} to the {groupForResponseKey}.
  - If {selection} is a {FragmentSpread}:
    - Let {fragmentSpreadName} be the name of {selection}.
    - If {selection} provides the directive `@defer` and its {if} argument is
      not {false} and is not a variable in {variableValues} with the value
      {false}:
      - Let {deferDirective} be that directive.
      - If this execution is for a subscription operation, raise a _field
        error_.
    - If {deferDirective} is not defined:
      - If {fragmentSpreadName} is in {visitedFragments}, continue with the next
        {selection} in {selectionSet}.
      - Add {fragmentSpreadName} to {visitedFragments}.
    - Let {fragment} be the Fragment in the current Document whose name is
      {fragmentSpreadName}.
    - If no such {fragment} exists, continue with the next {selection} in
      {selectionSet}.
    - Let {fragmentType} be the type condition on {fragment}.
    - If {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue
      with the next {selection} in {selectionSet}.
- Let {fragmentSelectionSet} be the top-level selection set of {fragment}.
-    - Let {fragmentGroupedFieldSet} be the result of calling
-      {CollectFields(objectType, fragmentSelectionSet, variableValues,
-      visitedFragments)}.
+    - If {deferDirective} is defined, let {fragmentDeferUsage} be
+      {deferDirective} and append it to {newDeferUsages}.
+    - Otherwise, let {fragmentDeferUsage} be {deferUsage}.
+    - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result
+      of calling {CollectFields(objectType, fragmentSelectionSet,
+      variableValues, fragmentDeferUsage, visitedFragments)}.
    - For each {fragmentGroup} in {fragmentGroupedFieldSet}:
      - Let {responseKey} be the response key shared by all fields in
        {fragmentGroup}.
      - Let {groupForResponseKey} be the list in {groupedFields} for
        {responseKey}; if no such list exists, create it as an empty list.
      - Append all items in {fragmentGroup} to {groupForResponseKey}.
+    - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}.
  - If {selection} is an {InlineFragment}:
    - Let {fragmentType} be the type condition on {selection}.
    - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType,
      fragmentType)} is {false}, continue with the next {selection} in
      {selectionSet}.
    - Let {fragmentSelectionSet} be the top-level selection set of {selection}.
-    - Let {fragmentGroupedFieldSet} be the result of calling
-      {CollectFields(objectType, fragmentSelectionSet, variableValues,
-      visitedFragments)}.
+    - If {selection} provides the directive `@defer` and its {if} argument is
+      not {false} and is not a variable in {variableValues} with the value
+      {false}:
+      - Let {deferDirective} be that directive.
+      - If this execution is for a subscription operation, raise a _field
+        error_.
+    - If {deferDirective} is defined, let {fragmentDeferUsage} be
+      {deferDirective} and append it to {newDeferUsages}.
+    - Otherwise, let {fragmentDeferUsage} be {deferUsage}.
+    - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result
+      of calling {CollectFields(objectType, fragmentSelectionSet,
+      variableValues, fragmentDeferUsage, visitedFragments)}.
    - For each {fragmentGroup} in {fragmentGroupedFieldSet}:
      - Let {responseKey} be the response key shared by all fields in
        {fragmentGroup}.
      - Let {groupForResponseKey} be the list in {groupedFields} for
        {responseKey}; if no such list exists, create it as an empty list.
      - Append all items in {fragmentGroup} to {groupForResponseKey}.
-- Return {groupedFields}.
+    - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}.
+- Return {groupedFields} and {newDeferUsages}.

DoesFragmentTypeApply(objectType, fragmentType):

@@ -446,6 +601,105 @@ DoesFragmentTypeApply(objectType, fragmentType):

Note: The steps in {CollectFields()} evaluating the `@skip` and `@include`
directives may be applied in either order since they apply commutatively.

+### Field Plan Generation
+
+BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages):
+
+- If {parentDeferUsages} is not provided, initialize it to the empty set.
+- Initialize {fieldPlan} to an empty ordered map.
+- For each {responseKey} and {groupForResponseKey} of
+  {originalGroupedFieldSet}:
+  - Let {deferUsageSet} be the result of
+    {GetDeferUsageSet(groupForResponseKey)}.
+  - Let {groupedFieldSet} be the entry in {fieldPlan} for any equivalent set to
+    {deferUsageSet}; if no such map exists, create it as an empty ordered map.
+  - Set the entry for {responseKey} in {groupedFieldSet} to
+    {groupForResponseKey}.
+- Return {fieldPlan}.
+
+GetDeferUsageSet(fieldDetailsList):
+
+- Let {deferUsageSet} be the set containing the {deferUsage} entry from each
+  item in {fieldDetailsList}.
+- For each {deferUsage} of {deferUsageSet}:
+  - Let {ancestors} be the set of {deferUsage} entries that are ancestors of
+    {deferUsage}, collected by recursively following the {parent} entry on
+    {deferUsage}.
+  - If any of {ancestors} is contained by {deferUsageSet}, remove {deferUsage}
+    from {deferUsageSet}.
+- Return {deferUsageSet}.
+
+## Executing a Field Plan
+
+To execute a field plan, the object value being evaluated and the object type
+need to be known, as well as whether the non-deferred grouped field set must be
+executed serially, or may be executed in parallel.
+
+ExecuteFieldPlan(newDeferUsages, fieldPlan, objectType, objectValue,
+variableValues, serial, path, deferUsageSet, deferMap):
+
+- If {path} is not provided, initialize it to an empty list.
+- Let {newDeferMap} be the result of {GetNewDeferMap(newDeferUsages, path,
+  deferMap)}.
+- Let {groupedFieldSet} be the entry in {fieldPlan} for the set equivalent to
+  {deferUsageSet}.
+- Let {newGroupedFieldSets} be the remaining portion of {fieldPlan}.
+- Allowing for parallelization, perform the following steps:
+  - Let {data} and {nestedIncrementalDataRecords} be the result of running
+    {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,
+    variableValues, path, deferUsageSet, newDeferMap)} _serially_ if {serial} is
+    {true}, _normally_ (allowing parallelization) otherwise.
+  - Let {incrementalDataRecords} be the result of
+    {ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues,
+    newGroupedFieldSets, path, newDeferMap)}.
+- Append all items in {nestedIncrementalDataRecords} to
+  {incrementalDataRecords}.
+- Return {data} and {incrementalDataRecords}.
+
+GetNewDeferMap(newDeferUsages, path, deferMap):
+
+- If {newDeferUsages} is empty, return {deferMap}.
+- Let {newDeferMap} be a new unordered map containing all entries in {deferMap}.
+- For each {deferUsage} in {newDeferUsages}:
+  - Let {parentDeferUsage} and {label} be the corresponding entries on
+    {deferUsage}.
+  - Let {parent} be the entry in {deferMap} for {parentDeferUsage}.
+  - Let {newDeferredFragment} be an unordered map containing {parent}, {path}
+    and {label}.
+ - Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}. +- Return {newDeferMap}. + +ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues, +newGroupedFieldSets, path, deferMap): + +- Initialize {incrementalDataRecords} to an empty list. +- For each {deferUsageSet} and {groupedFieldSet} in {newGroupedFieldSets}: + - Let {deferredFragments} be an empty list. + - For each {deferUsage} in {deferUsageSet}: + - Let {deferredFragment} be the entry for {deferUsage} in {deferMap}. + - Append {deferredFragment} to {deferredFragments}. + - Let {incrementalDataRecord} represent the future execution of + {ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue, + variableValues, deferredFragments, path, deferUsageSet, deferMap)}, + incrementally completing {deferredFragments} at {path}. + - Append {incrementalDataRecord} to {incrementalDataRecords}. + - Schedule initiation of execution of {incrementalDataRecord} following any + implementation specific deferral. +- Return {incrementalDataRecords}. + +Note: {incrementalDataRecord} can be safely initiated without blocking +higher-priority data once any of {deferredFragments} are released as pending. + +ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue, +variableValues, path, deferUsageSet, deferMap): + +- Let {data} and {incrementalDataRecords} be the result of running + {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, + variableValues, path, deferUsageSet, deferMap)} _normally_ (allowing + parallelization). +- Let {errors} be the list of all _field error_ raised while completing {data}. +- Return an unordered map containing {data}, {errors}, and + {incrementalDataRecords}. + ## Executing a Grouped Field Set To execute a grouped field set, the object value being evaluated and the object @@ -455,23 +709,27 @@ be executed in parallel. Each represented field in the grouped field set produces an entry into a response map. 
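The defer-map bookkeeping specified by {GetNewDeferMap()} above can be
sketched in Python. The class names and data shapes below are illustrative
assumptions for this patch, not normative parts of the specification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class DeferUsage:
    # One entry per encountered @defer directive (see CollectFields above).
    label: Optional[str] = None
    parent: Optional["DeferUsage"] = None

@dataclass
class DeferredFragment:
    # The pending result that a DeferUsage maps to: remembers the path at
    # which it was encountered, its label, and its parent fragment.
    path: tuple
    label: Optional[str]
    parent: Optional["DeferredFragment"]

def get_new_defer_map(new_defer_usages, path=(), defer_map=None):
    """Sketch of GetNewDeferMap: extend the usage->fragment map with a new
    DeferredFragment for each newly encountered defer usage."""
    if not new_defer_usages:
        return defer_map or {}
    # Copy so callers at other paths keep their own view of the map.
    new_defer_map = dict(defer_map or {})
    for usage in new_defer_usages:
        parent = new_defer_map.get(usage.parent)  # None for root usages
        new_defer_map[usage] = DeferredFragment(
            path=path, label=usage.label, parent=parent
        )
    return new_defer_map
```

Note how a child usage's fragment links to the fragment already recorded for
its parent usage, mirroring the {parent} lookup in the algorithm above.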
-ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, -variableValues): +ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, variableValues, +path, deferUsageSet, deferMap): - Initialize {resultMap} to an empty ordered map. +- Initialize {incrementalDataRecords} to an empty list. - For each {groupedFieldSet} as {responseKey} and {fields}: - Let {fieldName} be the name of the first entry in {fields}. Note: This value is unaffected if an alias is used. - Let {fieldType} be the return type defined for the field {fieldName} of {objectType}. - If {fieldType} is defined: - - Let {responseValue} be {ExecuteField(objectType, objectValue, fieldType, - fields, variableValues)}. + - Let {responseValue} and {fieldIncrementalDataRecords} be the result of + {ExecuteField(objectType, objectValue, fieldType, fields, variableValues, + path)}. - Set {responseValue} as the value for {responseKey} in {resultMap}. -- Return {resultMap}. + - Append all items in {fieldIncrementalDataRecords} to + {incrementalDataRecords}. +- Return {resultMap} and {incrementalDataRecords}. Note: {resultMap} is ordered by which fields appear first in the operation. This -is explained in greater detail in the Field Collection section below. +is explained in greater detail in the Field Collection section above. **Errors and Non-Null Fields** @@ -591,16 +849,19 @@ coerces any provided argument values, then resolves a value for the field, and finally completes that value either by recursively executing another selection set or coercing a scalar value. -ExecuteField(objectType, objectValue, fieldType, fields, variableValues): +ExecuteField(objectType, objectValue, fieldType, fieldDetailsList, +variableValues, path, deferUsageSet, deferMap): -- Let {field} be the first entry in {fields}. +- Let {fieldDetails} be the first entry in {fieldDetailsList}. +- Let {field} be the corresponding entry on {fieldDetails}. - Let {fieldName} be the field name of {field}. 
+- Append {fieldName} to {path}.
- Let {argumentValues} be the result of {CoerceArgumentValues(objectType,
  field, variableValues)}.
- Let {resolvedValue} be {ResolveFieldValue(objectType, objectValue, fieldName,
  argumentValues)}.
- Return the result of {CompleteValue(fieldType, fields, resolvedValue,
-  variableValues)}.
+  variableValues, path, deferUsageSet, deferMap)}.

### Coercing Field Arguments

@@ -687,22 +948,22 @@ After resolving the value for a field, it is completed by ensuring it adheres to
the expected return type. If the return type is another Object type, then the
field execution process continues recursively.

-CompleteValue(fieldType, fields, result, variableValues):
+CompleteValue(fieldType, fieldDetailsList, result, variableValues, path,
+deferUsageSet, deferMap):

- If the {fieldType} is a Non-Null type:
  - Let {innerType} be the inner type of {fieldType}.
-  - Let {completedResult} be the result of calling {CompleteValue(innerType,
-    fields, result, variableValues)}.
+  - Let {completedResult} and {incrementalDataRecords} be the result of calling
+    {CompleteValue(innerType, fieldDetailsList, result, variableValues, path,
+    deferUsageSet, deferMap)}.
  - If {completedResult} is {null}, raise a _field error_.
-  - Return {completedResult}.
+  - Return {completedResult} and {incrementalDataRecords}.
- If {result} is {null} (or another internal value similar to {null} such as
  {undefined}), return {null}.
- If {fieldType} is a List type:
  - If {result} is not a collection of values, raise a _field error_.
  - Let {innerType} be the inner type of {fieldType}.
-  - Return a list where each list item is the result of calling
-    {CompleteValue(innerType, fields, resultItem, variableValues)}, where
-    {resultItem} is each item in {result}.
+  - Return the result of {CompleteListValue(innerType, fieldDetailsList, result,
+    variableValues, path, deferUsageSet, deferMap)}.
- If {fieldType} is a Scalar or Enum type:
  - Return the result of {CoerceResult(fieldType, result)}.
- If {fieldType} is an Object, Interface, or Union type:
  - If {fieldType} is an Object type.
    - Let {objectType} be {fieldType}.
  - Otherwise if {fieldType} is an Interface or Union type.
    - Let {objectType} be {ResolveAbstractType(fieldType, result)}.
-  - Let {groupedFieldSet} be the result of calling {CollectSubfields(objectType,
-    fields, variableValues)}.
-  - Return the result of evaluating {ExecuteGroupedFieldSet(groupedFieldSet,
-    objectType, result, variableValues)} _normally_ (allowing for
-    parallelization).
+  - Let {groupedFieldSet} and {newDeferUsages} be the result of calling
+    {CollectSubfields(objectType, fieldDetailsList, variableValues)}.
+  - Let {fieldPlan} be the result of {BuildFieldPlan(groupedFieldSet,
+    deferUsageSet)}.
+  - Return the result of {ExecuteFieldPlan(newDeferUsages, fieldPlan,
+    objectType, result, variableValues, false, path, deferUsageSet, deferMap)}.
+
+CompleteListValue(innerType, fieldDetailsList, result, variableValues, path,
+deferUsageSet, deferMap):
+
+- Initialize {items} and {incrementalDataRecords} to empty lists.
+- Let {index} be {0}.
+- For each {resultItem} of {result}:
+  - Let {itemPath} be {path} with {index} appended.
+  - Let {completedItem} and {itemIncrementalDataRecords} be the result of
+    calling {CompleteValue(innerType, fieldDetailsList, resultItem,
+    variableValues, itemPath, deferUsageSet, deferMap)}.
+  - Append {completedItem} to {items}.
+  - Append all items in {itemIncrementalDataRecords} to
+    {incrementalDataRecords}.
+  - Increment {index} by {1}.
+- Return {items} and {incrementalDataRecords}.

**Coercing Results**
-- For each {field} in {fields}: +- Initialize {groupedFieldSet} to an empty ordered map of lists. +- Initialize {newDeferUsages} to an empty list. +- For each {fieldDetails} in {fieldDetailsList}: + - Let {field} and {deferUsage} be the corresponding entries on {fieldDetails}. - Let {fieldSelectionSet} be the selection set of {field}. - If {fieldSelectionSet} is null or empty, continue to the next field. - - Let {subGroupedFieldSet} be the result of {CollectFields(objectType, - fieldSelectionSet, variableValues)}. + - Let {subGroupedFieldSet} and {subNewDeferUsages} be the result of + {CollectFields(objectType, fieldSelectionSet, variableValues, deferUsage)}. - For each {subGroupedFieldSet} as {responseKey} and {subfields}: - Let {groupForResponseKey} be the list in {groupedFieldSet} for {responseKey}; if no such list exists, create it as an empty list. - Append all fields in {subfields} to {groupForResponseKey}. + - Append all defer usages in {subNewDeferUsages} to {newDeferUsages}. - Return {groupedFieldSet}. ### Handling Field Errors From e09dcf2f64ccfc918d2cd079c374c85f7b48dff9 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 13 Jun 2024 15:04:00 +0300 Subject: [PATCH 06/39] refactor a few lines out of YieldSubsequentResults --- spec/Section 6 -- Execution.md | 80 ++++++++++++++++++++++------------ 1 file changed, 51 insertions(+), 29 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index e4be2282e..3fe39075b 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -359,15 +359,12 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Add {incrementalDataRecord} to {graph} as a new Pending Data node directed from the {pendingResults} that it completes, adding each of {pendingResults} to {graph} as new nodes, if necessary, each directed from its {parent}, if - defined, recursively adding each {parent} as necessary. 
-- Prune root nodes of {graph} containing no direct child Incremental Data - Records, repeatedly if necessary, promoting any direct child Deferred - Fragments of the pruned nodes to root nodes. (This ensures that no empty - fragments are reported as pending). -- Let {newPendingResults} be the set of root nodes in {graph}. -- Let {pending} be the result of {GetPending(newPendingResults)}. -- Let {hasNext} be {true}. -- Yield an unordered map containing {data}, {errors}, {pending}, and {hasNext}. + defined, recursively adding each {parent} as necessary until + {incrementalDataRecord} is connected to {graph}. +- Let {pendingResults} be the result of {GetNonEmptyNewPending(graph)}. +- Prune root nodes from {graph} not in {pendingResults}, repeating as necessary + until all root nodes in {graph} are also in {pendingResults}. +- Yield the result of {GetInitialResult(data, errors, pending)}. - For each completed child Pending Incremental Data node of a root node in {graph}: - Let {incrementalDataRecord} be the Pending Incremental Data for that node; @@ -383,7 +380,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): parents. - Let {hasNext} be {false}, if {graph} is empty. - Yield an unordered map containing {completed} and {hasNext}. - - Continue to the next completed child Incremental Data node in {graph}. + - Continue to the next completed Pending Incremental Data node. - Replace {node} in {graph} with a new node corresponding to the Completed Incremental Data for {result}. - Add each {incrementalDataRecord} of {incrementalDataRecords} on {result} to @@ -393,7 +390,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Let {completedIncrementalDataNodes} be the set of completed Incremental Data nodes that are children of {completedDeferredFragments}. - If {completedIncrementalDataNodes} is empty, continue to the next completed - child Incremental Data node in {graph}. + Pending Incremental Data Node. 
- Initialize {incremental} to an empty list.
- For each {node} of {completedIncrementalDataNodes}:
  - Let {incrementalDataRecord} be the corresponding record for {node}.
@@ -405,32 +402,57 @@ YieldIncrementalResults(data, errors, incrementalDataRecords):
  - Append {GetCompletedEntry(pendingResult)} to {completed}.
  - Remove {pendingResult} from {graph}, promoting its child nodes to root
    nodes.
-  - Prune root nodes of {graph} containing no direct child Incremental Data
-    Records, as above.
-  - Let {hasNext} be {false} if {graph} is empty.
-  - Let {incrementalResult} be an unordered map containing {hasNext}.
-  - If {incremental} is not empty, set the corresponding entry on
-    {incrementalResult} to {incremental}.
-  - If {completed} is not empty, set the corresponding entry on
-    {incrementalResult} to {completed}.
-  - Let {newPendingResults} be the set of new root nodes in {graph}, promoted by
-    the above steps.
-  - If {newPendingResults} is not empty:
-    - Let {pending} be the result of {GetPending(newPendingResults)}.
-    - Set the corresponding entry on {incrementalResult} to {pending}.
-  - Yield {incrementalResult}.
+  - Let {newPendingResults} be a new set containing the result of
+    {GetNonEmptyNewPending(graph, pendingResults)}.
+  - Add all nodes in {newPendingResults} to {pendingResults}.
+  - Prune root nodes from {graph} not in {pendingResults}, repeating as
+    necessary until all root nodes in {graph} are also in {pendingResults}.
+  - Let {pending} be the result of {GetPendingEntry(newPendingResults)}.
+  - Yield the result of {GetIncrementalResult(graph, incremental, completed,
+    pending)}.
- Complete this incremental result stream.

-GetPending(newPendingResults):
+GetNonEmptyNewPending(graph, oldPendingResults):
+
+- If not provided, initialize {oldPendingResults} to the empty set.
+- Let {rootNodes} be the set of root nodes in {graph}.
+- For each {rootNode} of {rootNodes}:
+  - If {rootNode} is in {oldPendingResults}:
+    - Continue to the next {rootNode}.
+  - If {rootNode} has no child Pending Incremental Data nodes:
+    - Let {children} be the set of child Deferred Fragment nodes of {rootNode}.
+    - Remove {rootNode} from {rootNodes}.
+    - Add each of the nodes in {children} to {rootNodes}.
+- Return {rootNodes}.
+
+GetInitialResult(data, errors, pendingResults):
+
+- Let {pending} be the result of {GetPendingEntry(pendingResults)}.
+- Let {hasNext} be {true}.
+- Return an unordered map containing {data}, {errors}, {pending}, and {hasNext}.
+
+GetPendingEntry(pendingResults):

- Initialize {pending} to an empty list.
- For each {pendingResult} of {pendingResults}:
  - Let {id} be a unique identifier for {pendingResult}.
  - Let {path} and {label} be the corresponding entries on {pendingResult}.
  - Let {pendingEntry} be an unordered map containing {id}, {path}, and {label}.
  - Append {pendingEntry} to {pending}.
- Return {pending}.

+GetIncrementalResult(graph, incremental, completed, pending):
+
+- Let {hasNext} be {false} if {graph} is empty, otherwise {true}.
+- Let {incrementalResult} be an unordered map containing {hasNext}.
+- If {incremental} is not empty:
+  - Set the corresponding entry on {incrementalResult} to {incremental}.
+- If {completed} is not empty:
+  - Set the corresponding entry on {incrementalResult} to {completed}.
+- If {pending} is not empty:
+  - Set the corresponding entry on {incrementalResult} to {pending}.
+- Return {incrementalResult}.
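The payload-assembly steps of {GetPendingEntry()} and {GetIncrementalResult()}
above can be sketched in Python. Identifier generation and the map
representation are implementation-defined, so the shapes below are assumptions
for illustration only:

```python
import itertools

# Opaque unique ids; a plain counter is used purely for illustration.
_next_id = itertools.count(1)

def get_pending_entry(pending_results):
    """Sketch of GetPendingEntry: emit one {id, path, label} entry per
    newly pending result, recording the id for later reference."""
    pending = []
    for pending_result in pending_results:
        pending_result["id"] = str(next(_next_id))
        pending.append({
            "id": pending_result["id"],
            "path": pending_result["path"],
            "label": pending_result["label"],
        })
    return pending

def get_incremental_result(graph_is_empty, incremental, completed, pending):
    """Sketch of GetIncrementalResult: hasNext reflects whether the graph
    still has nodes; empty lists are omitted from the payload."""
    result = {"hasNext": not graph_is_empty}
    if incremental:
        result["incremental"] = incremental
    if completed:
        result["completed"] = completed
    if pending:
        result["pending"] = pending
    return result
```

Omitting empty `incremental`, `completed`, and `pending` lists matches the
"if ... is not empty" guards in the algorithm above, keeping subsequent
payloads minimal.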
+ GetIncrementalEntry(incrementalDataRecord, graph): - Let {deferredFragments} be the Deferred Fragments incrementally completed by From d4934b31baf0445243b6859134fda6182bece7b0 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 18 Jun 2024 22:37:22 +0300 Subject: [PATCH 07/39] add a word or two about which child nodes are being promoted --- spec/Section 6 -- Execution.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 3fe39075b..0250ec06c 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -400,8 +400,8 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Initialize {completed} to an empty list. - For each {pendingResult} of {completedDeferredFragments}: - Append {GetCompletedEntry(pendingResult)} to {completed}. - - Remove {pendingResult} from {graph}, promoting its child nodes to root - nodes. + - Remove {pendingResult} from {graph}, promoting its child Deferred Fragment + nodes to root nodes. - Let {newPendingResults} be a new set containing the result of {GetNonEmptyNewPending(graph, pendingResults)}. - Add all nodes in {newPendingResults} to {pendingResults}. From fd3987ce71b0563ce40251beb5abd3c96eff5bef Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 18 Jun 2024 22:58:32 +0300 Subject: [PATCH 08/39] be more graphy --- spec/Section 6 -- Execution.md | 28 ++++++++++++++++------------ 1 file changed, 16 insertions(+), 12 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 0250ec06c..76edeeebb 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -354,17 +354,10 @@ The procedure for yielding incremental results is specified by the YieldIncrementalResults(data, errors, incrementalDataRecords): -- Initialize {graph} to an empty directed acyclic graph. 
-- For each {incrementalDataRecord} of {incrementalDataRecords}: - - Add {incrementalDataRecord} to {graph} as a new Pending Data node directed - from the {pendingResults} that it completes, adding each of {pendingResults} - to {graph} as new nodes, if necessary, each directed from its {parent}, if - defined, recursively adding each {parent} as necessary until - {incrementalDataRecord} is connected to {graph}. +- Let {graph} be the result of {BuildGraph(incrementalDataRecords)}. - Let {pendingResults} be the result of {GetNonEmptyNewPending(graph)}. -- Prune root nodes from {graph} not in {pendingResults}, repeating as necessary - until all root nodes in {graph} are also in {pendingResults}. -- Yield the result of {GetInitialResult(data, errors, pending)}. +- Update {graph} to the subgraph rooted at nodes in {pendingResults}. +- Yield the result of {GetInitialResult(data, errors, pendingResults)}. - For each completed child Pending Incremental Data node of a root node in {graph}: - Let {incrementalDataRecord} be the Pending Incremental Data for that node; @@ -405,13 +398,24 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Let {newPendingResults} be a new set containing the result of {GetNonEmptyNewPending(graph, pendingResults)}. - Add all nodes in {newPendingResults} to {pendingResults}. - - Prune root nodes from {graph} not in {pendingResults}, repeating as - necessary until all root nodes in {graph} are also in {pendingResults}. + - Update {graph} to the subgraph rooted at nodes in {pendingResults}. - Let {pending} be the result of {GetPendingEntry(newPendingResults)}. - Yield the result of {GetIncrementalResult(graph, incremental, completed, pending)}. - Complete this incremental result stream. +BuildGraph(incrementalDataRecords): + +- Initialize {graph} to an empty directed acyclic graph, where the root nodes + represent the Subsequent Result nodes that have been released as pending. 
+- For each {incrementalDataRecord} of {incrementalDataRecords}: + - Add {incrementalDataRecord} to {graph} as a new Pending Data node directed + from the {pendingResults} that it completes, adding each of {pendingResults} + to {graph} as new nodes, if necessary, each directed from its {parent}, if + defined, recursively adding each {parent} as necessary until + {incrementalDataRecord} is connected to {graph}. +- Return {graph}. + GetNonEmptyNewPending(graph, oldPendingResults): - If not provided, initialize {oldPendingResults} to the empty set. From a94d7c0cfb1ecd04207237e3f02901a7ce03f487 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 19 Jun 2024 06:09:52 +0300 Subject: [PATCH 09/39] fix timing --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 76edeeebb..89ae99ced 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -407,7 +407,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): BuildGraph(incrementalDataRecords): - Initialize {graph} to an empty directed acyclic graph, where the root nodes - represent the Subsequent Result nodes that have been released as pending. + represent the pending Subsequent Results. 
- For each {incrementalDataRecord} of {incrementalDataRecords}: - Add {incrementalDataRecord} to {graph} as a new Pending Data node directed from the {pendingResults} that it completes, adding each of {pendingResults} From acc720b702661a71c957943cf5573cd8a4f25637 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 19 Jun 2024 06:16:58 +0300 Subject: [PATCH 10/39] reuse function --- spec/Section 6 -- Execution.md | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 89ae99ced..e74b7eb17 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -376,8 +376,8 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Continue to the next completed Pending Incremental Data node. - Replace {node} in {graph} with a new node corresponding to the Completed Incremental Data for {result}. - - Add each {incrementalDataRecord} of {incrementalDataRecords} on {result} to - {graph} via the same procedure as above. + - Let {resultIncrementalDataRecords} be {incrementalDataRecords} on {result}. + - Update {graph} to {BuildGraph(resultIncrementalDataRecords, graph)}. - Let {completedDeferredFragments} be the set of root nodes in {graph} without any child Pending Data nodes. - Let {completedIncrementalDataNodes} be the set of completed Incremental Data @@ -404,17 +404,17 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): pending)}. - Complete this incremental result stream. -BuildGraph(incrementalDataRecords): +BuildGraph(incrementalDataRecords, graph): -- Initialize {graph} to an empty directed acyclic graph, where the root nodes - represent the pending Subsequent Results. +- Let {newGraph} be a new directed acyclic graph containing all of the nodes and + edges in {graph}. 
- For each {incrementalDataRecord} of {incrementalDataRecords}: - - Add {incrementalDataRecord} to {graph} as a new Pending Data node directed - from the {pendingResults} that it completes, adding each of {pendingResults} - to {graph} as new nodes, if necessary, each directed from its {parent}, if - defined, recursively adding each {parent} as necessary until - {incrementalDataRecord} is connected to {graph}. -- Return {graph}. + - Add {incrementalDataRecord} to {newGraph} as a new Pending Data node + directed from the {pendingResults} that it completes, adding each of + {pendingResults} to {newGraph} as new nodes, if necessary, each directed + from its {parent}, if defined, recursively adding each {parent} as necessary + until {incrementalDataRecord} is connected to {newGraph}. +- Return {newGraph}. GetNonEmptyNewPending(graph, oldPendingResults): From 6d3fccb32c457060bbf83af3c9bae6551fe43bac Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 19 Jun 2024 06:21:19 +0300 Subject: [PATCH 11/39] fix --- spec/Section 6 -- Execution.md | 16 +++++++--------- 1 file changed, 7 insertions(+), 9 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index e74b7eb17..500c9b7a8 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -395,8 +395,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Append {GetCompletedEntry(pendingResult)} to {completed}. - Remove {pendingResult} from {graph}, promoting its child Deferred Fragment nodes to root nodes. - - Let {newPendingResults} be a new set containing the result of - {GetNonEmptyNewPending(graph, pendingResults)}. + - Let {newPendingResults} be the result of {GetNonEmptyNewPending(graph)}. - Add all nodes in {newPendingResults} to {pendingResults}. - Update {graph} to the subgraph rooted at nodes in {pendingResults}. - Let {pending} be the result of {GetPendingEntry(newPendingResults)}. 
@@ -416,18 +415,17 @@ BuildGraph(incrementalDataRecords, graph): until {incrementalDataRecord} is connected to {newGraph}. - Return {newGraph}. -GetNonEmptyNewPending(graph, oldPendingResults): +GetNonEmptyNewPending(graph): -- If not provided, initialize {oldPendingResults} to the empty set. -- Let {rootNodes} be the set of root nodes in {graph}. +- Initialize {newPendingResults} to the empty set. +- Initialize {rootNodes} to the set of root nodes in {graph}. - For each {rootNode} of {rootNodes}: - - If {rootNodes} is in {oldPendingResults}: - - Continue to the next {rootNode}. - If {rootNode} has no children Pending Incremental Data nodes: - Let {children} be the set of child Deferred Fragment nodes of {rootNode}. - - Remove {rootNode} from {rootNodes}. - Add each of the nodes in {children} to {rootNodes}. -- Return {rootNodes}. + - Continue to the next {rootNode} of {rootNodes}. + - Add {rootNode} to {newPendingResults}. +- Return {newPendingResults}. GetInitialResult(data, errors, pendingResults): From 9237ed2f028f915da0ea3b44226c7baa066e10cb Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 19 Jun 2024 06:23:15 +0300 Subject: [PATCH 12/39] rename BuildGraph to GraphFromRecords --- spec/Section 6 -- Execution.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 500c9b7a8..14a71afd0 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -354,7 +354,7 @@ The procedure for yielding incremental results is specified by the YieldIncrementalResults(data, errors, incrementalDataRecords): -- Let {graph} be the result of {BuildGraph(incrementalDataRecords)}. +- Let {graph} be the result of {GraphFromRecords(incrementalDataRecords)}. - Let {pendingResults} be the result of {GetNonEmptyNewPending(graph)}. - Update {graph} to the subgraph rooted at nodes in {pendingResults}. - Yield the result of {GetInitialResult(data, errors, pendingResults)}. 
@@ -377,7 +377,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Replace {node} in {graph} with a new node corresponding to the Completed Incremental Data for {result}. - Let {resultIncrementalDataRecords} be {incrementalDataRecords} on {result}. - - Update {graph} to {BuildGraph(resultIncrementalDataRecords, graph)}. + - Update {graph} to {GraphFromRecords(resultIncrementalDataRecords, graph)}. - Let {completedDeferredFragments} be the set of root nodes in {graph} without any child Pending Data nodes. - Let {completedIncrementalDataNodes} be the set of completed Incremental Data @@ -403,7 +403,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): pending)}. - Complete this incremental result stream. -BuildGraph(incrementalDataRecords, graph): +GraphFromRecords(incrementalDataRecords, graph): - Let {newGraph} be a new directed acyclic graph containing all of the nodes and edges in {graph}. From 76b73855496c9c1cd0a4b42e3cbcb062a6cfb4e4 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 19 Jun 2024 06:25:31 +0300 Subject: [PATCH 13/39] reword recursive abort case --- spec/Section 6 -- Execution.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 14a71afd0..c69553de4 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -410,9 +410,9 @@ GraphFromRecords(incrementalDataRecords, graph): - For each {incrementalDataRecord} of {incrementalDataRecords}: - Add {incrementalDataRecord} to {newGraph} as a new Pending Data node directed from the {pendingResults} that it completes, adding each of - {pendingResults} to {newGraph} as new nodes, if necessary, each directed - from its {parent}, if defined, recursively adding each {parent} as necessary - until {incrementalDataRecord} is connected to {newGraph}. 
+ {pendingResults} to {newGraph} as a new node directed from its {parent}, + recursively adding each {parent} until {incrementalDataRecord} is connected + to {newGraph}, or the {parent} is not defined. - Return {newGraph}. GetNonEmptyNewPending(graph): From f3b6a3f84100db71e6fa93f9d0ea8eb7792531f1 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 17 Jul 2024 22:51:33 +0300 Subject: [PATCH 14/39] bring BuildFieldPlan in line with implementation --- spec/Section 6 -- Execution.md | 55 +++++++++++++++++++++------------- 1 file changed, 34 insertions(+), 21 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index c69553de4..d78d40ec3 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -630,27 +630,41 @@ directives may be applied in either order since they apply commutatively. BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages): - If {parentDeferUsages} is not provided, initialize it to the empty set. -- Initialize {fieldPlan} to an empty ordered map. +- Initialize {groupedFieldSet} to an empty ordered map. +- Initialize {newGroupedFieldSets} to an empty unordered map. +- Let {fieldPlan} be an unordered map containing {groupedFieldSet} and + {newGroupedFieldSets}. - For each {responseKey} and {groupForResponseKey} of {groupedFieldSet}: - - Let {deferUsageSet} be the result of - {GetDeferUsageSet(groupForResponseKey)}. - - Let {groupedFieldSet} be the entry in {fieldPlan} for any equivalent set to - {deferUsageSet}; if no such map exists, create it as an empty ordered map. - - Set the entry for {responseKey} in {groupedFieldSet} to - {groupForResponseKey}. + - Let {filteredDeferUsageSet} be the result of + {GetFilteredDeferUsageSet(groupForResponseKey)}. + - If {filteredDeferUsageSet} is the equivalent set to {parentDeferUsages}: + - Set the entry for {responseKey} in {groupedFieldSet} to + {groupForResponseKey}. 
+  - Otherwise:
+    - Let {newGroupedFieldSet} be the entry in {newGroupedFieldSets} for any
+      equivalent set to {filteredDeferUsageSet}; if no such map exists, create
+      it as an empty ordered map.
+    - Set the entry for {responseKey} in {newGroupedFieldSet} to
+      {groupForResponseKey}.
- Return {fieldPlan}.

-GetDeferUsageSet(fieldDetailsList):
-
-- Let {deferUsageSet} be the set containing the {deferUsage} entry from each
-  item in {fieldDetailsList}.
-- For each {deferUsage} of {deferUsageSet}:
-  - Let {ancestors} be the set of {deferUsage} entries that are ancestors of
-    {deferUsage}, collected by recursively following the {parent} entry on
-    {deferUsage}.
-  - If any of {ancestors} is contained by {deferUsageSet}, remove {deferUsage}
-    from {deferUsageSet}.
-- Return {deferUsageSet}.
+GetFilteredDeferUsageSet(fieldGroup):
+
+- Initialize {filteredDeferUsageSet} to the empty set.
+- For each {fieldDetails} of {fieldGroup}:
+  - Let {deferUsage} be the corresponding entry on {fieldDetails}.
+  - If {deferUsage} is not defined:
+    - Remove all entries from {filteredDeferUsageSet}.
+    - Return {filteredDeferUsageSet}.
+  - Add {deferUsage} to {filteredDeferUsageSet}.
+- For each {deferUsage} in {filteredDeferUsageSet}:
+  - Let {parentDeferUsage} be the corresponding entry on {deferUsage}.
+  - While {parentDeferUsage} is defined:
+    - If {parentDeferUsage} is contained by {filteredDeferUsageSet}:
+      - Remove {deferUsage} from {filteredDeferUsageSet}.
+      - Continue to the next {deferUsage} in {filteredDeferUsageSet}.
+    - Reset {parentDeferUsage} to the corresponding entry on {parentDeferUsage}.
+- Return {filteredDeferUsageSet}.

## Executing a Field Plan

To execute a field plan, the object value being evaluated and the object type
@@ -664,9 +678,8 @@ variableValues, serial, path, deferUsageSet, deferMap):

- If {path} is not provided, initialize it to an empty list.
- Let {newDeferMap} be the result of {GetNewDeferMap(newDeferUsages, path,
  deferMap)}.
-- Let {groupedFieldSet} be the entry in {fieldPlan} for the set equivalent to
-  {deferUsageSet}.
-- Let {newGroupedFieldSets} be the remaining portion of {fieldPlan}. +- Let {groupedFieldSet} and {newGroupedFieldSets} be the corresponding entries + on {fieldPlan}. - Allowing for parallelization, perform the following steps: - Let {data} and {nestedIncrementalDataRecords} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, From 319456653cfb18a2c2069b93cd0961a9f9020c48 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 17 Jul 2024 23:01:18 +0300 Subject: [PATCH 15/39] rename "deferred grouped field set record" to "execution group" --- spec/Section 6 -- Execution.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index d78d40ec3..8754e1385 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -364,7 +364,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): let {result} be the corresponding completed result. - If {data} on {result} is {null}: - Initialize {completed} to an empty list. - - Let {parents} be the parent nodes of {deferredGroupedFieldSetRecord}. + - Let {parents} be the parent nodes of {executionGroup}. - Initialize {completed} to an empty list. - For each {pendingResult} of {parents}: - Append {GetCompletedEntry(parent, errors)} to {completed}. @@ -686,7 +686,7 @@ variableValues, serial, path, deferUsageSet, deferMap): variableValues, path, deferUsageSet, newDeferMap)} _serially_ if {serial} is {true}, _normally_ (allowing parallelization) otherwise. - Let {incrementalDataRecords} be the result of - {ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues, + {ExecuteExecutionGroups(objectType, objectValue, variableValues, newGroupedFieldSets, path, newDeferMap)}. - Append all items in {nestedIncrementalDataRecords} to {incrementalDataRecords}. 
@@ -705,7 +705,7 @@ GetNewDeferMap(newDeferUsages, path, deferMap): - Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}. - Return {newDeferMap}. -ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues, +ExecuteExecutionGroups(objectType, objectValue, variableValues, newGroupedFieldSets, path, deferMap): - Initialize {incrementalDataRecords} to an empty list. @@ -715,7 +715,7 @@ newGroupedFieldSets, path, deferMap): - Let {deferredFragment} be the entry for {deferUsage} in {deferMap}. - Append {deferredFragment} to {deferredFragments}. - Let {incrementalDataRecord} represent the future execution of - {ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue, + {ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues, deferredFragments, path, deferUsageSet, deferMap)}, incrementally completing {deferredFragments} at {path}. - Append {incrementalDataRecord} to {incrementalDataRecords}. @@ -726,8 +726,8 @@ newGroupedFieldSets, path, deferMap): Note: {incrementalDataRecord} can be safely initiated without blocking higher-priority data once any of {deferredFragments} are released as pending. 
-ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue,
-variableValues, path, deferUsageSet, deferMap):
+ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues,
+path, deferUsageSet, deferMap):

- Let {data} and {incrementalDataRecords} be the result of running
  {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,

From ab8ccc3865fac4a7916601df8acca1d0d93d7d9e Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Wed, 17 Jul 2024 23:02:43 +0300
Subject: [PATCH 16/39] rename ExecuteExecutionGroup to CollectExecutionGroup

---
 spec/Section 6 -- Execution.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 8754e1385..31e52768f 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -686,7 +686,7 @@ variableValues, serial, path, deferUsageSet, deferMap):
  variableValues, path, deferUsageSet, newDeferMap)} _serially_ if {serial} is
  {true}, _normally_ (allowing parallelization) otherwise.
- Let {incrementalDataRecords} be the result of
-  {ExecuteExecutionGroups(objectType, objectValue, variableValues,
+  {CollectExecutionGroups(objectType, objectValue, variableValues,
  newGroupedFieldSets, path, newDeferMap)}.
@@ -705,7 +705,7 @@ GetNewDeferMap(newDeferUsages, path, deferMap):
  - Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}.
- Return {newDeferMap}.

-ExecuteExecutionGroups(objectType, objectValue, variableValues,
+CollectExecutionGroups(objectType, objectValue, variableValues,
newGroupedFieldSets, path, deferMap):

- Initialize {incrementalDataRecords} to an empty list.
@@ -715,7 +715,7 @@ newGroupedFieldSets, path, deferMap):
    - Let {deferredFragment} be the entry for {deferUsage} in {deferMap}.
    - Append {deferredFragment} to {deferredFragments}.
- Let {incrementalDataRecord} represent the future execution of - {ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue, + {CollectExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues, deferredFragments, path, deferUsageSet, deferMap)}, incrementally completing {deferredFragments} at {path}. - Append {incrementalDataRecord} to {incrementalDataRecords}. @@ -726,7 +726,7 @@ newGroupedFieldSets, path, deferMap): Note: {incrementalDataRecord} can be safely initiated without blocking higher-priority data once any of {deferredFragments} are released as pending. -ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues, +CollectExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues, path, deferUsageSet, deferMap): - Let {data} and {incrementalDataRecords} be the result of running From 14b107f319ed9a561ee1a401da00762a80e9c46d Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 18 Jul 2024 17:26:26 +0300 Subject: [PATCH 17/39] properly initialize deferUsages with their parents --- spec/Section 6 -- Execution.md | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 31e52768f..3612dccd9 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -570,8 +570,11 @@ visitedFragments): - If {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue with the next {selection} in {selectionSet}. - Let {fragmentSelectionSet} be the top-level selection set of {fragment}. - - If {deferDirective} is defined, let {fragmentDeferUsage} be - {deferDirective} and append it to {newDeferUsages}. + - If {deferDirective} is defined: + - Let {path} be the corresponding entry on {deferDirective}. + - Let {parentDeferUsage} be {deferUsage}. + - Let {fragmentDeferUsage} be an unordered map containing {path} and + {parentDeferUsage}. - Otherwise, let {fragmentDeferUsage} be {deferUsage}. 
- Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result of calling {CollectFields(objectType, fragmentSelectionSet, @@ -595,8 +598,11 @@ visitedFragments): - Let {deferDirective} be that directive. - If this execution is for a subscription operation, raise a _field error_. - - If {deferDirective} is defined, let {fragmentDeferUsage} be - {deferDirective} and append it to {newDeferUsages}. + - If {deferDirective} is defined: + - Let {path} be the corresponding entry on {deferDirective}. + - Let {parentDeferUsage} be {deferUsage}. + - Let {fragmentDeferUsage} be an unordered map containing {path} and + {parentDeferUsage}. - Otherwise, let {fragmentDeferUsage} be {deferUsage}. - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result of calling {CollectFields(objectType, fragmentSelectionSet, From 1645856e2370a123e3c8b09226e2d4d0848eacae Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 18 Jul 2024 17:27:41 +0300 Subject: [PATCH 18/39] move Field Collection back to where it was mostly to reduce the diff. --- spec/Section 6 -- Execution.md | 358 ++++++++++++++++----------------- 1 file changed, 179 insertions(+), 179 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 3612dccd9..aea2ea33a 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -493,185 +493,6 @@ BatchIncrementalResults(incrementalResults): of {hasNext} on the final item in the list. - Yield {batchedIncrementalResult}. -### Field Collection - -Before execution, the _selection set_ is converted to a grouped field set by -calling {CollectFields()}. Each entry in the grouped field set is a list of -fields that share a response key (the alias if defined, otherwise the field -name). This ensures all fields with the same response key (including those in -referenced fragments) are executed at the same time. 
- -As an example, collecting the fields of this selection set would collect two -instances of the field `a` and one of field `b`: - -```graphql example -{ - a { - subfield1 - } - ...ExampleFragment -} - -fragment ExampleFragment on Query { - a { - subfield2 - } - b -} -``` - -The depth-first-search order of the field groups produced by {CollectFields()} -is maintained through execution, ensuring that fields appear in the executed -response in a stable and predictable order. - -CollectFields(objectType, selectionSet, variableValues, deferUsage, -visitedFragments): - -- If {visitedFragments} is not provided, initialize it to the empty set. -- Initialize {groupedFields} to an empty ordered map of lists. -- Initialize {newDeferUsages} to an empty list. -- For each {selection} in {selectionSet}: - - If {selection} provides the directive `@skip`, let {skipDirective} be that - directive. - - If {skipDirective}'s {if} argument is {true} or is a variable in - {variableValues} with the value {true}, continue with the next {selection} - in {selectionSet}. - - If {selection} provides the directive `@include`, let {includeDirective} be - that directive. - - If {includeDirective}'s {if} argument is not {true} and is not a variable - in {variableValues} with the value {true}, continue with the next - {selection} in {selectionSet}. - - If {selection} is a {Field}: - - Let {responseKey} be the response key of {selection} (the alias if - defined, otherwise the field name). - - Let {fieldDetails} be a new unordered map containing {deferUsage}. - - Set the entry for {field} on {fieldDetails} to {selection}. and - {deferUsage}. - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append {fieldDetails} to the {groupForResponseKey}. - - If {selection} is a {FragmentSpread}: - - Let {fragmentSpreadName} be the name of {selection}. 
- - If {fragmentSpreadName} provides the directive `@defer` and its {if} - argument is not {false} and is not a variable in {variableValues} with the - value {false}: - - Let {deferDirective} be that directive. - - If this execution is for a subscription operation, raise a _field - error_. - - If {deferDirective} is not defined: - - If {fragmentSpreadName} is in {visitedFragments}, continue with the next - {selection} in {selectionSet}. - - Add {fragmentSpreadName} to {visitedFragments}. - - Let {fragment} be the Fragment in the current Document whose name is - {fragmentSpreadName}. - - If no such {fragment} exists, continue with the next {selection} in - {selectionSet}. - - Let {fragmentType} be the type condition on {fragment}. - - If {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue - with the next {selection} in {selectionSet}. - - Let {fragmentSelectionSet} be the top-level selection set of {fragment}. - - If {deferDirective} is defined: - - Let {path} be the corresponding entry on {deferDirective}. - - Let {parentDeferUsage} be {deferUsage}. - - Let {fragmentDeferUsage} be an unordered map containing {path} and - {parentDeferUsage}. - - Otherwise, let {fragmentDeferUsage} be {deferUsage}. - - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result - of calling {CollectFields(objectType, fragmentSelectionSet, - variableValues, fragmentDeferUsage, visitedFragments)}. - - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - - Let {responseKey} be the response key shared by all fields in - {fragmentGroup}. - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append all items in {fragmentGroup} to {groupForResponseKey}. - - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}. - - If {selection} is an {InlineFragment}: - - Let {fragmentType} be the type condition on {selection}. 
- - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType, - fragmentType)} is {false}, continue with the next {selection} in - {selectionSet}. - - Let {fragmentSelectionSet} be the top-level selection set of {selection}. - - If {InlineFragment} provides the directive `@defer` and its {if} argument - is not {false} and is not a variable in {variableValues} with the value - {false}: - - Let {deferDirective} be that directive. - - If this execution is for a subscription operation, raise a _field - error_. - - If {deferDirective} is defined: - - Let {path} be the corresponding entry on {deferDirective}. - - Let {parentDeferUsage} be {deferUsage}. - - Let {fragmentDeferUsage} be an unordered map containing {path} and - {parentDeferUsage}. - - Otherwise, let {fragmentDeferUsage} be {deferUsage}. - - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result - of calling {CollectFields(objectType, fragmentSelectionSet, - variableValues, fragmentDeferUsage, visitedFragments)}. - - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - - Let {responseKey} be the response key shared by all fields in - {fragmentGroup}. - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append all items in {fragmentGroup} to {groupForResponseKey}. - - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}. -- Return {groupedFields} and {newDeferUsages}. - -DoesFragmentTypeApply(objectType, fragmentType): - -- If {fragmentType} is an Object Type: - - If {objectType} and {fragmentType} are the same type, return {true}, - otherwise return {false}. -- If {fragmentType} is an Interface Type: - - If {objectType} is an implementation of {fragmentType}, return {true} - otherwise return {false}. -- If {fragmentType} is a Union: - - If {objectType} is a possible type of {fragmentType}, return {true} - otherwise return {false}. 
- -Note: The steps in {CollectFields()} evaluating the `@skip` and `@include` -directives may be applied in either order since they apply commutatively. - -### Field Plan Generation - -BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages): - -- If {parentDeferUsages} is not provided, initialize it to the empty set. -- Initialize {groupedFieldSet} to an empty ordered map. -- Initialize {newGroupedFieldSets} to an empty unordered map. -- Let {fieldPlan} be an unordered map containing {groupedFieldSet} and - {newGroupedFieldSets}. -- For each {responseKey} and {groupForResponseKey} of {groupedFieldSet}: - - Let {filteredDeferUsageSet} be the result of - {GetFilteredDeferUsageSet(groupForResponseKey)}. - - If {filteredDeferUsageSet} is the equivalent set to {parentDeferUsages}: - - Set the entry for {responseKey} in {groupedFieldSet} to - {groupForResponseKey}. - - Otherwise: - - Let {newGroupedFieldSet} be the entry in {newGroupedFieldSets} for any - equivalent set to {deferUsageSet}; if no such map exists, create it as an - empty ordered map. - - Set the entry for {responseKey} in {newGroupedFieldSet} to - {groupForResponseKey}. -- Return {fieldPlan}. - -GetFilteredDeferUsageSet(fieldGroup): - -- Initialize {filteredDeferUsageSet} to the empty set. -- For each {fieldDetails} of {fieldGroup}: - - Let {deferUsage} be the corresponding entry on {fieldDetails}. - - If {deferUsage} is not defined: - - Remove all entries from {filteredDeferUsageSet}. - - Return {filteredDeferUsageSet}. - - Add {deferUsage} to {filteredDeferUsageSet}. -- For each {deferUsage} in {filteredDeferUsageSet}: - - Let {parentDeferUsage} be the corresponding entry on {deferUsage}. - - While {parentDeferUsage} is defined: - - If {parentDeferUsage} is contained by {filteredDeferUsageSet}: - - Remove {deferUsage} from {filteredDeferUsageSet}. - - Continue to the next {deferUsage} in {filteredDeferUsageSet}. - - Reset {parentDeferUsage} to the corresponding entry on {parentDeferUsage}. 
-- Return {filteredDeferUsageSet}. - ## Executing a Field Plan To execute a field plan, the object value being evaluated and the object type @@ -884,6 +705,185 @@ A correct executor must generate the following result for that _selection set_: } ``` +### Field Collection + +Before execution, the _selection set_ is converted to a grouped field set by +calling {CollectFields()}. Each entry in the grouped field set is a list of +fields that share a response key (the alias if defined, otherwise the field +name). This ensures all fields with the same response key (including those in +referenced fragments) are executed at the same time. + +As an example, collecting the fields of this selection set would collect two +instances of the field `a` and one of field `b`: + +```graphql example +{ + a { + subfield1 + } + ...ExampleFragment +} + +fragment ExampleFragment on Query { + a { + subfield2 + } + b +} +``` + +The depth-first-search order of the field groups produced by {CollectFields()} +is maintained through execution, ensuring that fields appear in the executed +response in a stable and predictable order. + +CollectFields(objectType, selectionSet, variableValues, deferUsage, +visitedFragments): + +- If {visitedFragments} is not provided, initialize it to the empty set. +- Initialize {groupedFields} to an empty ordered map of lists. +- Initialize {newDeferUsages} to an empty list. +- For each {selection} in {selectionSet}: + - If {selection} provides the directive `@skip`, let {skipDirective} be that + directive. + - If {skipDirective}'s {if} argument is {true} or is a variable in + {variableValues} with the value {true}, continue with the next {selection} + in {selectionSet}. + - If {selection} provides the directive `@include`, let {includeDirective} be + that directive. + - If {includeDirective}'s {if} argument is not {true} and is not a variable + in {variableValues} with the value {true}, continue with the next + {selection} in {selectionSet}. 
+  - If {selection} is a {Field}:
+    - Let {responseKey} be the response key of {selection} (the alias if
+      defined, otherwise the field name).
+    - Let {fieldDetails} be a new unordered map containing {deferUsage}.
+    - Set the entry for {field} on {fieldDetails} to {selection}.
+    - Let {groupForResponseKey} be the list in {groupedFields} for
+      {responseKey}; if no such list exists, create it as an empty list.
+    - Append {fieldDetails} to the {groupForResponseKey}.
+  - If {selection} is a {FragmentSpread}:
+    - Let {fragmentSpreadName} be the name of {selection}.
+    - If {fragmentSpreadName} provides the directive `@defer` and its {if}
+      argument is not {false} and is not a variable in {variableValues} with the
+      value {false}:
+      - Let {deferDirective} be that directive.
+      - If this execution is for a subscription operation, raise a _field
+        error_.
+    - If {deferDirective} is not defined:
+      - If {fragmentSpreadName} is in {visitedFragments}, continue with the next
+        {selection} in {selectionSet}.
+      - Add {fragmentSpreadName} to {visitedFragments}.
+    - Let {fragment} be the Fragment in the current Document whose name is
+      {fragmentSpreadName}.
+    - If no such {fragment} exists, continue with the next {selection} in
+      {selectionSet}.
+    - Let {fragmentType} be the type condition on {fragment}.
+    - If {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue
+      with the next {selection} in {selectionSet}.
+    - Let {fragmentSelectionSet} be the top-level selection set of {fragment}.
+    - If {deferDirective} is defined:
+      - Let {path} be the corresponding entry on {deferDirective}.
+      - Let {parentDeferUsage} be {deferUsage}.
+      - Let {fragmentDeferUsage} be an unordered map containing {path} and
+        {parentDeferUsage}.
+    - Otherwise, let {fragmentDeferUsage} be {deferUsage}.
+ - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result + of calling {CollectFields(objectType, fragmentSelectionSet, + variableValues, fragmentDeferUsage, visitedFragments)}. + - For each {fragmentGroup} in {fragmentGroupedFieldSet}: + - Let {responseKey} be the response key shared by all fields in + {fragmentGroup}. + - Let {groupForResponseKey} be the list in {groupedFields} for + {responseKey}; if no such list exists, create it as an empty list. + - Append all items in {fragmentGroup} to {groupForResponseKey}. + - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}. + - If {selection} is an {InlineFragment}: + - Let {fragmentType} be the type condition on {selection}. + - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType, + fragmentType)} is {false}, continue with the next {selection} in + {selectionSet}. + - Let {fragmentSelectionSet} be the top-level selection set of {selection}. + - If {InlineFragment} provides the directive `@defer` and its {if} argument + is not {false} and is not a variable in {variableValues} with the value + {false}: + - Let {deferDirective} be that directive. + - If this execution is for a subscription operation, raise a _field + error_. + - If {deferDirective} is defined: + - Let {path} be the corresponding entry on {deferDirective}. + - Let {parentDeferUsage} be {deferUsage}. + - Let {fragmentDeferUsage} be an unordered map containing {path} and + {parentDeferUsage}. + - Otherwise, let {fragmentDeferUsage} be {deferUsage}. + - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result + of calling {CollectFields(objectType, fragmentSelectionSet, + variableValues, fragmentDeferUsage, visitedFragments)}. + - For each {fragmentGroup} in {fragmentGroupedFieldSet}: + - Let {responseKey} be the response key shared by all fields in + {fragmentGroup}. 
+      - Let {groupForResponseKey} be the list in {groupedFields} for
+        {responseKey}; if no such list exists, create it as an empty list.
+      - Append all items in {fragmentGroup} to {groupForResponseKey}.
+    - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}.
+- Return {groupedFields} and {newDeferUsages}.
+
+DoesFragmentTypeApply(objectType, fragmentType):
+
+- If {fragmentType} is an Object Type:
+  - If {objectType} and {fragmentType} are the same type, return {true},
+    otherwise return {false}.
+- If {fragmentType} is an Interface Type:
+  - If {objectType} is an implementation of {fragmentType}, return {true}
+    otherwise return {false}.
+- If {fragmentType} is a Union:
+  - If {objectType} is a possible type of {fragmentType}, return {true}
+    otherwise return {false}.
+
+Note: The steps in {CollectFields()} evaluating the `@skip` and `@include`
+directives may be applied in either order since they apply commutatively.
+
+### Field Plan Generation
+
+BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages):
+
+- If {parentDeferUsages} is not provided, initialize it to the empty set.
+- Initialize {groupedFieldSet} to an empty ordered map.
+- Initialize {newGroupedFieldSets} to an empty unordered map.
+- Let {fieldPlan} be an unordered map containing {groupedFieldSet} and
+  {newGroupedFieldSets}.
+- For each {responseKey} and {groupForResponseKey} of {originalGroupedFieldSet}:
+  - Let {filteredDeferUsageSet} be the result of
+    {GetFilteredDeferUsageSet(groupForResponseKey)}.
+  - If {filteredDeferUsageSet} is the equivalent set to {parentDeferUsages}:
+    - Set the entry for {responseKey} in {groupedFieldSet} to
+      {groupForResponseKey}.
+  - Otherwise:
+    - Let {newGroupedFieldSet} be the entry in {newGroupedFieldSets} for any
+      equivalent set to {filteredDeferUsageSet}; if no such map exists, create
+      it as an empty ordered map.
+    - Set the entry for {responseKey} in {newGroupedFieldSet} to
+      {groupForResponseKey}.
+- Return {fieldPlan}.
+ +GetFilteredDeferUsageSet(fieldGroup): + +- Initialize {filteredDeferUsageSet} to the empty set. +- For each {fieldDetails} of {fieldGroup}: + - Let {deferUsage} be the corresponding entry on {fieldDetails}. + - If {deferUsage} is not defined: + - Remove all entries from {filteredDeferUsageSet}. + - Return {filteredDeferUsageSet}. + - Add {deferUsage} to {filteredDeferUsageSet}. +- For each {deferUsage} in {filteredDeferUsageSet}: + - Let {parentDeferUsage} be the corresponding entry on {deferUsage}. + - While {parentDeferUsage} is defined: + - If {parentDeferUsage} is contained by {filteredDeferUsageSet}: + - Remove {deferUsage} from {filteredDeferUsageSet}. + - Continue to the next {deferUsage} in {filteredDeferUsageSet}. + - Reset {parentDeferUsage} to the corresponding entry on {parentDeferUsage}. +- Return {filteredDeferUsageSet}. + ## Executing Fields Each field requested in the grouped field set that is defined on the selected From 76c2a987676eabf497eb7dd3ffbf49174069c1a8 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 18 Jul 2024 17:28:37 +0300 Subject: [PATCH 19/39] f --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index aea2ea33a..65782a4b3 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -593,7 +593,7 @@ path, deferUsageSet, deferMap): - Return {resultMap} and {incrementalDataRecords}. Note: {resultMap} is ordered by which fields appear first in the operation. This -is explained in greater detail in the Field Collection section above. +is explained in greater detail in the Field Collection section below. 
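The ordering guarantee in the note above — {resultMap} follows the order in which fields first appear in the operation — falls out naturally if grouped field sets are kept in insertion order. A hypothetical sketch using Python's order-preserving dict:

```python
def collect_response_keys(selections):
    # selections: (response_key, field) pairs in document order, with
    # fragment fields already flattened in. setdefault() appends to an
    # existing key without moving it, so first appearance wins.
    grouped_fields = {}
    for response_key, field in selections:
        grouped_fields.setdefault(response_key, []).append(field)
    return grouped_fields
```

A later occurrence of `a` merges into the existing entry, so `a` still precedes `b` in the result even though `b` appeared between the two `a` selections.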
**Errors and Non-Null Fields** From 81a935346b861e580d69ec775b717e73f5e8eb8d Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 18 Jul 2024 17:30:44 +0300 Subject: [PATCH 20/39] use fieldDetailsList consistently instead of sometimes fieldGroup, for consistency and so as to remove another "Group" term --- spec/Section 6 -- Execution.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 65782a4b3..55f0bd244 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -866,10 +866,10 @@ BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages): {groupForResponseKey}. - Return {fieldPlan}. -GetFilteredDeferUsageSet(fieldGroup): +GetFilteredDeferUsageSet(fieldDetailsList): - Initialize {filteredDeferUsageSet} to the empty set. -- For each {fieldDetails} of {fieldGroup}: +- For each {fieldDetails} of {fieldDetailsList}: - Let {deferUsage} be the corresponding entry on {fieldDetails}. - If {deferUsage} is not defined: - Remove all entries from {filteredDeferUsageSet}. From 74bfb431a0c90e4b368f5b631ea788211aca66eb Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 18 Jul 2024 23:12:42 +0300 Subject: [PATCH 21/39] add info re: data structures --- spec/Section 6 -- Execution.md | 38 ++++++++++++++++++++++++++++++++++ 1 file changed, 38 insertions(+) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 55f0bd244..95bde52b7 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -736,6 +736,40 @@ The depth-first-search order of the field groups produced by {CollectFields()} is maintained through execution, ensuring that fields appear in the executed response in a stable and predictable order. +The {CollectFields()} algorithm makes use of the following data types: + +Defer Usage Records are unordered maps representing the usage of a `@defer` +directive within a given operation. 
Defer Usages are "abstract" in that they +include information about the `@defer` directive from the AST of the GraphQL +document. A single Defer Usage may be used to create many "concrete" Delivery +Groups when a `@defer` is included within a list type. + +Defer Usages contain the following information: + +- {label}: the `label` argument provided by the given `@defer` directive, if + any, otherwise {undefined}. +- {parentDeferUsage}: a Defer Usage corresponding to the `@defer` directive + enclosing this `@defer` directive, if any, otherwise {undefined}. + +The {parentDeferUsage} entry is used to build distinct Execution Groups as +discussed within the Field Plan Generation section below. + +Field Details Records are unordered maps containing the following entries: + +- {field}: the Field selection. +- {deferUsage}: the Defer Usage enclosing the selection, if any, otherwise + {undefined}. + +A Grouped Field Set is an ordered map of keys to lists of Field Details. The +keys are the same as that of the response, the alias for the field, if defined, +otherwise the field name. + +The {CollectFields()} algorithm returns: + +- {groupedFieldSet}: the Grouped Field Set for the fields in the selection set. +- {newDeferUsages}: a list of new Defer Usages encountered during this field + collection. + CollectFields(objectType, selectionSet, variableValues, deferUsage, visitedFragments): @@ -843,6 +877,10 @@ DoesFragmentTypeApply(objectType, fragmentType): Note: The steps in {CollectFields()} evaluating the `@skip` and `@include` directives may be applied in either order since they apply commutatively. +Note: When completing a List field, the {CollectFields} algorithm is invoked +with the same arguments for each element of the list. GraphQL Services may +choose to memoize their implementations of {CollectFields}. 
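The memoization suggested by the note above can be sketched as a generic two-argument cache keyed on object identity; `memoize2` is a hypothetical helper for illustration, not graphql-js API:

```typescript
// Hypothetical sketch of memoizing a collect-fields style function on its
// first two (object) arguments, so repeated invocations for each list
// element reuse the first result.
function memoize2<A extends object, B extends object, R>(
  fn: (a: A, b: B) => R,
): (a: A, b: B) => R {
  // WeakMaps let cached entries be garbage-collected with their keys.
  const cache = new WeakMap<A, WeakMap<B, R>>();
  return (a, b) => {
    let inner = cache.get(a);
    if (inner === undefined) {
      inner = new WeakMap();
      cache.set(a, inner);
    }
    let result = inner.get(b);
    if (result === undefined) {
      result = fn(a, b);
      inner.set(b, result);
    }
    return result;
  };
}
```

A service could wrap its collect-fields implementation this way, keyed for example on the object type and the selection set node.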
+

### Field Plan Generation

BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages):

From 215d4f3316ad727e292dece46a967436fc425da2 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Sat, 20 Jul 2024 21:43:11 +0300
Subject: [PATCH 22/39] rename FieldPlan to ExecutionPlan

---
 spec/Section 6 -- Execution.md | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 95bde52b7..e61efc4ec 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -334,9 +334,9 @@ serial):

- If {serial} is not provided, initialize it to {false}.
- Let {groupedFieldSet} and {newDeferUsages} be the result of
  {CollectFields(objectType, selectionSet, variableValues)}.
-- Let {fieldPlan} be the result of {BuildFieldPlan(groupedFieldSet)}.
+- Let {executionPlan} be the result of {BuildExecutionPlan(groupedFieldSet)}.
- Let {data} and {incrementalDataRecords} be the result of
-  {ExecuteFieldPlan(newDeferUsages, fieldPlan, objectType, initialValue,
+  {ExecuteExecutionPlan(newDeferUsages, executionPlan, objectType, initialValue,
  variableValues, serial)}.
- Let {errors} be the list of all _field error_ raised while completing {data}.
- If {incrementalDataRecords} is empty, return an unordered map containing
@@ -493,20 +493,20 @@

-## Executing a Field Plan
+## Executing an Execution Plan

-To execute a field plan, the object value being evaluated and the object type
-need to be known, as well as whether the non-deferred grouped field set must be
-executed serially, or may be executed in parallel.
+To execute an execution plan, the object value being evaluated and the object
+type need to be known, as well as whether the non-deferred grouped field set
+must be executed serially, or may be executed in parallel.
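The plan structure described above, a non-deferred grouped field set plus a map of partitioned grouped field sets, can be sketched as TypeScript shapes. The names mirror the spec's terms but the concrete types are assumptions for illustration:

```typescript
// Hypothetical shapes for an execution plan; `field` stands in for a field
// selection AST node.
interface DeferUsage {
  label?: string;
  parentDeferUsage?: DeferUsage;
}

interface FieldDetails {
  field: string;
  deferUsage?: DeferUsage;
}

// Response key -> list of field details, preserving selection order.
type GroupedFieldSet = Map<string, FieldDetails[]>;

interface ExecutionPlan {
  // Fields executed immediately (serially for mutations, else in parallel).
  groupedFieldSet: GroupedFieldSet;
  // Deferred fields, partitioned by the set of defer usages that defer them.
  newGroupedFieldSets: Map<ReadonlySet<DeferUsage>, GroupedFieldSet>;
}
```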
-ExecuteFieldPlan(newDeferUsages, fieldPlan, objectType, objectValue, +ExecuteExecutionPlan(newDeferUsages, executionPlan, objectType, objectValue, variableValues, serial, path, deferUsageSet, deferMap): - If {path} is not provided, initialize it to an empty list. - Let {newDeferMap} be the result of {GetNewDeferMap(newDeferUsages, path, deferMap)}. - Let {groupedFieldSet} and {newGroupedFieldSets} be the corresponding entries - on {fieldPlan}. + on {executionPlan}. - Allowing for parallelization, perform the following steps: - Let {data} and {nestedIncrementalDataRecords} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, @@ -752,7 +752,7 @@ Defer Usages contain the following information: enclosing this `@defer` directive, if any, otherwise {undefined}. The {parentDeferUsage} entry is used to build distinct Execution Groups as -discussed within the Field Plan Generation section below. +discussed within the Execution Plan Generation section below. Field Details Records are unordered maps containing the following entries: @@ -881,14 +881,14 @@ Note: When completing a List field, the {CollectFields} algorithm is invoked with the same arguments for each element of the list. GraphQL Services may choose to memoize their implementations of {CollectFields}. -### Field Plan Generation +### Execution Plan Generation -BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages): +BuildExecutionPlan(originalGroupedFieldSet, parentDeferUsages): - If {parentDeferUsages} is not provided, initialize it to the empty set. - Initialize {groupedFieldSet} to an empty ordered map. - Initialize {newGroupedFieldSets} to an empty unordered map. -- Let {fieldPlan} be an unordered map containing {groupedFieldSet} and +- Let {executionPlan} be an unordered map containing {groupedFieldSet} and {newGroupedFieldSets}. 
- For each {responseKey} and {groupForResponseKey} of {originalGroupedFieldSet}:
  - Let {filteredDeferUsageSet} be the result of
@@ -902,7 +902,7 @@ BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages):
      empty ordered map.
    - Set the entry for {responseKey} in {newGroupedFieldSet} to
      {groupForResponseKey}.
-- Return {fieldPlan}.
+- Return {executionPlan}.

GetFilteredDeferUsageSet(fieldDetailsList):

@@ -1054,9 +1054,9 @@ deferUsageSet, deferMap):
  - Let {objectType} be {ResolveAbstractType(fieldType, result)}.
  - Let {groupedFieldSet} and {newDeferUsages} be the result of calling
    {CollectSubfields(objectType, fieldDetailsList, variableValues)}.
-  - Let {fieldPlan} be the result of {BuildFieldPlan(groupedFieldSet,
+  - Let {executionPlan} be the result of {BuildExecutionPlan(groupedFieldSet,
    deferUsageSet)}.
-  - Return the result of {ExecuteFieldPlan(newDeferUsages, fieldPlan,
+  - Return the result of {ExecuteExecutionPlan(newDeferUsages, executionPlan,
    objectType, result, variableValues, false, path, deferUsageSet, deferMap)}.

CompleteListValue(innerType, fieldDetailsList, result, variableValues, path,

From 375804cdf1403bb9fc3a287266452399790d87f3 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Wed, 24 Jul 2024 20:31:30 +0300
Subject: [PATCH 23/39] path => label

---
 spec/Section 6 -- Execution.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index e61efc4ec..6d1464755 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -817,9 +817,9 @@ visitedFragments):
     with the next {selection} in {selectionSet}.
   - Let {fragmentSelectionSet} be the top-level selection set of {fragment}.
   - If {deferDirective} is defined:
-    - Let {path} be the corresponding entry on {deferDirective}.
+    - Let {label} be the corresponding entry on {deferDirective}.
    - Let {parentDeferUsage} be {deferUsage}.
- - Let {fragmentDeferUsage} be an unordered map containing {path} and + - Let {fragmentDeferUsage} be an unordered map containing {label} and {parentDeferUsage}. - Otherwise, let {fragmentDeferUsage} be {deferUsage}. - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result @@ -845,9 +845,9 @@ visitedFragments): - If this execution is for a subscription operation, raise a _field error_. - If {deferDirective} is defined: - - Let {path} be the corresponding entry on {deferDirective}. + - Let {label} be the corresponding entry on {deferDirective}. - Let {parentDeferUsage} be {deferUsage}. - - Let {fragmentDeferUsage} be an unordered map containing {path} and + - Let {fragmentDeferUsage} be an unordered map containing {label} and {parentDeferUsage}. - Otherwise, let {fragmentDeferUsage} be {deferUsage}. - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result From 9ad6065ffe9a2c31ef16758cee59f2eb7052ac3e Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 25 Jul 2024 19:46:19 +0300 Subject: [PATCH 24/39] add missing arguments --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 6d1464755..85757a57b 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -586,7 +586,7 @@ path, deferUsageSet, deferMap): - If {fieldType} is defined: - Let {responseValue} and {fieldIncrementalDataRecords} be the result of {ExecuteField(objectType, objectValue, fieldType, fields, variableValues, - path)}. + path, deferUsageSet, deferMap)}. - Set {responseValue} as the value for {responseKey} in {resultMap}. - Append all items in {fieldIncrementalDataRecords} to {incrementalDataRecords}. 
From 895428c165f8551f1723f09c8ad85027488a9c2b Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 25 Jul 2024 19:50:45 +0300 Subject: [PATCH 25/39] add missing return value --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 85757a57b..060ea155a 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -1154,7 +1154,7 @@ CollectSubfields(objectType, fieldDetailsList, variableValues): {responseKey}; if no such list exists, create it as an empty list. - Append all fields in {subfields} to {groupForResponseKey}. - Append all defer usages in {subNewDeferUsages} to {newDeferUsages}. -- Return {groupedFieldSet}. +- Return {groupedFieldSet} and {newDeferUsages}. ### Handling Field Errors From 255ef415bc5b51b99522a1ebc6bd22d615777b9e Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 25 Jul 2024 20:16:26 +0300 Subject: [PATCH 26/39] fix some renaming around CollectExecutionGroups and ExecuteExecutionGroup --- spec/Section 6 -- Execution.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 060ea155a..82bddc9da 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -513,7 +513,7 @@ variableValues, serial, path, deferUsageSet, deferMap): variableValues, path, deferUsageSet, newDeferMap)} _serially_ if {serial} is {true}, _normally_ (allowing parallelization) otherwise. - Let {incrementalDataRecords} be the result of - {CollectExecutionGroup(objectType, objectValue, variableValues, + {CollectExecutionGroups(objectType, objectValue, variableValues, newGroupedFieldSets, path, newDeferMap)}. - Append all items in {nestedIncrementalDataRecords} to {incrementalDataRecords}. @@ -542,7 +542,7 @@ newGroupedFieldSets, path, deferMap): - Let {deferredFragment} be the entry for {deferUsage} in {deferMap}. 
- Append {deferredFragment} to {deferredFragments}. - Let {incrementalDataRecord} represent the future execution of - {CollectExecutionGroup(groupedFieldSet, objectType, objectValue, + {ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues, deferredFragments, path, deferUsageSet, deferMap)}, incrementally completing {deferredFragments} at {path}. - Append {incrementalDataRecord} to {incrementalDataRecords}. @@ -553,7 +553,7 @@ newGroupedFieldSets, path, deferMap): Note: {incrementalDataRecord} can be safely initiated without blocking higher-priority data once any of {deferredFragments} are released as pending. -CollectExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues, +ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues, path, deferUsageSet, deferMap): - Let {data} and {incrementalDataRecords} be the result of running From 4bd72b4057a4052723f21c8a6490839e1386d940 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Mon, 26 Aug 2024 09:02:46 -0400 Subject: [PATCH 27/39] Correct argument name "node" should be "field" within CreateSourceEventStream Co-authored-by: Rob Richard --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 82bddc9da..0f6a6671c 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -261,7 +261,7 @@ CreateSourceEventStream(subscription, schema, variableValues, initialValue): - Let {fieldName} be the name of {field}. Note: This value is unaffected if an alias is used. - Let {argumentValues} be the result of {CoerceArgumentValues(subscriptionType, - node, variableValues)}. + field, variableValues)}. - Let {fieldStream} be the result of running {ResolveFieldEventStream(subscriptionType, initialValue, fieldName, argumentValues)}. 
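The phrase "represent the future execution" above can be modeled as a lazily-initiated thunk, so that an execution group's work runs at most once and need not start until one of its deferred fragments is released as pending. All shapes here are assumptions for illustration:

```typescript
// Hypothetical sketch of an incremental data record wrapping deferred work.
interface DeferredFragment {
  label?: string;
  path: ReadonlyArray<string | number>;
}

interface IncrementalDataRecord<T> {
  deferredFragments: ReadonlyArray<DeferredFragment>;
  // Initiates the deferred work at most once, returning the cached result on
  // subsequent calls.
  execute: () => T;
}

function futureExecution<T>(
  deferredFragments: ReadonlyArray<DeferredFragment>,
  work: () => T,
): IncrementalDataRecord<T> {
  let result: { value: T } | undefined;
  return {
    deferredFragments,
    execute: () => {
      if (result === undefined) {
        result = { value: work() };
      }
      return result.value;
    },
  };
}
```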
From 7c0088e45c0849b66df9d9f021935d75b97d784d Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Mon, 26 Aug 2024 16:07:29 +0300 Subject: [PATCH 28/39] clarify errors from ExecuteExecutionPlan accompanying note is a WIP, open to further suggestions as to how to clarify --- spec/Section 6 -- Execution.md | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 0f6a6671c..a97f00eeb 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -338,7 +338,8 @@ serial): - Let {data} and {incrementalDataRecords} be the result of {ExecuteExecutionPlan(newDeferUsages, executionPlan, objectType, initialValue, variableValues, serial)}. -- Let {errors} be the list of all _field error_ raised while completing {data}. +- Let {errors} be the list of all _field error_ raised while executing the + execution plan. - If {incrementalDataRecords} is empty, return an unordered map containing {data} and {errors}. - Let {incrementalResults} be the result of {YieldIncrementalResults(data, @@ -347,6 +348,9 @@ serial): - Let {initialResult} be that result. - Return {initialResult} and {BatchIncrementalResults(incrementalResults)}. +Note: {ExecuteExecutionPlan()} does not directly raise field errors from the +incremental portion of the Execution Plan. 
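The note above separates errors raised by the non-deferred portion (reported in the initial result) from errors raised by deferred execution groups (delivered within later incremental results under this proposal). A hypothetical TypeScript sketch of the two response envelopes, with field names following the incremental delivery proposal but exact types assumed:

```typescript
// Illustrative response envelope shapes; not a normative definition.
interface PendingEntry {
  id: string;
  path: (string | number)[];
  label?: string;
}

interface InitialResult {
  data: unknown;
  errors?: unknown[]; // field errors from the non-deferred portion only
  pending?: PendingEntry[];
  hasNext: boolean;
}

interface SubsequentResult {
  incremental?: unknown[]; // deferred data, carrying its own errors
  completed?: { id: string; errors?: unknown[] }[];
  pending?: PendingEntry[];
  hasNext: boolean;
}
```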
+

### Yielding Incremental Results

From 6cc43d130c25170566eacdbc7416bd0dc8956450 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Mon, 26 Aug 2024 16:36:42 +0300
Subject: [PATCH 29/39] add initial versions of explanations for the algorithms
 in the "Executing an Execution Plan" section

---
 spec/Section 6 -- Execution.md | 38 ++++++++++++++++++++++++++++++----
 1 file changed, 34 insertions(+), 4 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index a97f00eeb..3bbd9d48a 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -499,9 +499,12 @@

-To execute an execution plan, the object value being evaluated and the object
-type need to be known, as well as whether the non-deferred grouped field set
-must be executed serially, or may be executed in parallel.
+Executing an execution plan consists of two tasks that may be performed in
+parallel. The first task is simply the execution of the non-deferred grouped
+field set. The second task is to use the partitioned grouped field sets within
+the execution plan to generate Execution Groups, i.e. Incremental Data Records,
+where each Incremental Data Record represents the deferred execution of one of
+the partitioned grouped field sets.

ExecuteExecutionPlan(newDeferUsages, executionPlan, objectType, objectValue,
variableValues, serial, path, deferUsageSet, deferMap):
@@ -523,6 +526,15 @@ variableValues, serial, path, deferUsageSet, deferMap):
  {incrementalDataRecords}.
- Return {data} and {incrementalDataRecords}.

+Because `@defer` directives may be nested within list types, a map is required
+to associate a Defer Usage record as recorded within Field Details Records and
+an actual Deferred Fragment so that any additional Execution Groups may be
+associated with the correct Deferred Fragment.
The {GetNewDeferMap()} algorithm +creates that map. Given a list of new Defer Usages, the actual path at which the +fields they defer are spread, and an initial map, it returns a new map +containing all entries in the provided defer map, as well as new entries for +each new Defer Usage. + GetNewDeferMap(newDeferUsages, path, deferMap): - If {newDeferUsages} is empty, return {deferMap}: @@ -536,6 +548,11 @@ GetNewDeferMap(newDeferUsages, path, deferMap): - Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}. - Return {newDeferMap}. +The {CollectExecutionGroups()} algorithm is responsible for creating the +Execution Groups, i.e. Incremental Data Records, for each partitioned grouped +field set. It uses the map created by {GetNewDeferMap()} algorithm to associate +each Execution Group with the correct Deferred Fragment. + CollectExecutionGroups(objectType, objectValue, variableValues, newGroupedFieldSets, path, deferMap): @@ -557,6 +574,9 @@ newGroupedFieldSets, path, deferMap): Note: {incrementalDataRecord} can be safely initiated without blocking higher-priority data once any of {deferredFragments} are released as pending. +The {ExecuteExecutionGroup()} algorithm is responsible for actually executing +the deferred grouped field set and collecting the result and any raised errors. + ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues, path, deferUsageSet, deferMap): @@ -564,7 +584,8 @@ path, deferUsageSet, deferMap): {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, variableValues, path, deferUsageSet, deferMap)} _normally_ (allowing parallelization). -- Let {errors} be the list of all _field error_ raised while completing {data}. +- Let {errors} be the list of all _field error_ raised while executing + {ExecuteGroupedFieldSet()}. - Return an unordered map containing {data}, {errors}, and {incrementalDataRecords}. @@ -887,6 +908,15 @@ choose to memoize their implementations of {CollectFields}. 
### Execution Plan Generation

+A grouped field set may contain fields that have been deferred by the use of
+the `@defer` directive on their enclosing fragments. Given a grouped field set,
+{BuildExecutionPlan()} generates an execution plan by partitioning the grouped
+field set as specified by the operation's use of `@defer` and the requirements
+of the incremental response format. An execution plan consists of a single new
+grouped field set containing the fields that do not require deferral, and a map
+of new grouped field sets, where each key is the set of Defer Usages that defer
+the fields in the corresponding grouped field set.
+
BuildExecutionPlan(originalGroupedFieldSet, parentDeferUsages):

From e1e50c9dfaeeaa01433e91c8b2d6b9ad59da8c27 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Thu, 5 Sep 2024 23:22:17 +0300
Subject: [PATCH 30/39] add subheadings

---
 spec/Section 6 -- Execution.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 3bbd9d48a..80777849f 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -526,6 +526,8 @@ variableValues, serial, path, deferUsageSet, deferMap):
  {incrementalDataRecords}.
- Return {data} and {incrementalDataRecords}.

+### Mapping Deferred Fragments to Delivery Groups
+
Because `@defer` directives may be nested within list types, a map is required
to associate a Defer Usage record as recorded within Field Details Records and
an actual Deferred Fragment so that any additional Execution Groups may be
@@ -548,6 +550,8 @@ GetNewDeferMap(newDeferUsages, path, deferMap):
  - Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}.
- Return {newDeferMap}.

+### Collecting Execution Groups
+
The {CollectExecutionGroups()} algorithm is responsible for creating the
Execution Groups, i.e. Incremental Data Records, for each partitioned grouped
field set.
It uses the map created by the {GetNewDeferMap()} algorithm to
associate each Execution Group with the correct Deferred Fragment.

CollectExecutionGroups(objectType, objectValue, variableValues,
newGroupedFieldSets, path, deferMap):

From 6c9d9ee6417303415b6ec4ae17192e54c56c4785 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Fri, 6 Sep 2024 09:51:21 +0300
Subject: [PATCH 31/39] adjust heading

---
 spec/Section 6 -- Execution.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 80777849f..aaebbcd82 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -526,7 +526,7 @@ variableValues, serial, path, deferUsageSet, deferMap):
  {incrementalDataRecords}.
- Return {data} and {incrementalDataRecords}.

-### Mapping Deferred Fragments to Delivery Groups
+### Mapping @defer Directives to Delivery Groups

Because `@defer` directives may be nested within list types, a map is required
to associate a Defer Usage record as recorded within Field Details Records and

From 29598b1f3c7d0d395649b8d4470233b22ba08904 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Fri, 6 Sep 2024 09:52:54 +0300
Subject: [PATCH 32/39] Initialize graph

---
 spec/Section 6 -- Execution.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index aaebbcd82..7e5078920 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -409,6 +409,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords):

GraphFromRecords(incrementalDataRecords, graph):

+- If {graph} is not provided, initialize it to an empty graph.
- Let {newGraph} be a new directed acyclic graph containing all of the nodes
  and edges in {graph}.
- For each {incrementalDataRecord} of {incrementalDataRecords}: From 4bb4fdee8b59e18e8bb1ac2c0263dda50fa4b9f2 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Fri, 6 Sep 2024 10:09:34 +0300 Subject: [PATCH 33/39] adjust YieldSubsequentResults algorithm per review --- spec/Section 6 -- Execution.md | 37 +++++++++++++++++----------------- 1 file changed, 18 insertions(+), 19 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 7e5078920..46b5dd4cb 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -359,23 +359,21 @@ The procedure for yielding incremental results is specified by the YieldIncrementalResults(data, errors, incrementalDataRecords): - Let {graph} be the result of {GraphFromRecords(incrementalDataRecords)}. -- Let {pendingResults} be the result of {GetNonEmptyNewPending(graph)}. -- Update {graph} to the subgraph rooted at nodes in {pendingResults}. +- Let {rootNodes} be the result of {GetNewRootNodes(graph)}. +- Update {graph} to the subgraph rooted at nodes in {rootNodes}. - Yield the result of {GetInitialResult(data, errors, pendingResults)}. - For each completed child Pending Incremental Data node of a root node in {graph}: - Let {incrementalDataRecord} be the Pending Incremental Data for that node; let {result} be the corresponding completed result. - If {data} on {result} is {null}: - - Initialize {completed} to an empty list. - Let {parents} be the parent nodes of {executionGroup}. - Initialize {completed} to an empty list. - - For each {pendingResult} of {parents}: + - For each {node} of {parents}: - Append {GetCompletedEntry(parent, errors)} to {completed}. - - Remove {pendingResult} and all of its descendant nodes from {graph}, - except for any descendant Incremental Data Record nodes with other - parents. - - Let {hasNext} be {false}, if {graph} is empty. 
+ - Remove {node} and all of its descendant nodes from {graph}, except for + any descendant Incremental Data Record nodes with other parents. + - Let {hasNext} be {false} if {graph} is empty; otherwise, {true}. - Yield an unordered map containing {completed} and {hasNext}. - Continue to the next completed Pending Incremental Data node. - Replace {node} in {graph} with a new node corresponding to the Completed @@ -389,20 +387,21 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - If {completedIncrementalDataNodes} is empty, continue to the next completed Pending Incremental Data Node. - Initialize {incremental} to an empty list. - - For each {node} of {completedIncrementalDataNodes}: - - Let {incrementalDataRecord} be the corresponding record for {node}. + - For each {completedIncrementalDataNode} of {completedIncrementalDataNodes}: + - Let {incrementalDataRecord} be the corresponding record for + {completedIncrementalDataNode}. - Append {GetIncrementalEntry(incrementalDataRecord, graph)} to {incremental}. - Remove {node} from {graph}. - Initialize {completed} to an empty list. - - For each {pendingResult} of {completedDeferredFragments}: - - Append {GetCompletedEntry(pendingResult)} to {completed}. - - Remove {pendingResult} from {graph}, promoting its child Deferred Fragment - nodes to root nodes. - - Let {newPendingResults} be the result of {GetNonEmptyNewPending(graph)}. - - Add all nodes in {newPendingResults} to {pendingResults}. - - Update {graph} to the subgraph rooted at nodes in {pendingResults}. - - Let {pending} be the result of {GetPendingEntry(newPendingResults)}. + - For each {completedDeferredFragment} of {completedDeferredFragments}: + - Append {GetCompletedEntry(completedDeferredFragment)} to {completed}. + - Remove {completedDeferredFragment} from {graph}, promoting its child + Deferred Fragment nodes to root nodes. + - Let {newRootNodes} be the result of {GetNewRootNodes(graph)}. + - Add all nodes in {newRootNodes} to {rootNodes}. 
+ - Update {graph} to the subgraph rooted at nodes in {rootNodes}. + - Let {pending} be the result of {GetPendingEntry(newRootNodes)}. - Yield the result of {GetIncrementalResult(graph, incremental, completed, pending)}. - Complete this incremental result stream. @@ -420,7 +419,7 @@ GraphFromRecords(incrementalDataRecords, graph): to {newGraph}, or the {parent} is not defined. - Return {newGraph}. -GetNonEmptyNewPending(graph): +GetNewRootNodes(graph): - Initialize {newPendingResults} to the empty set. - Initialize {rootNodes} to the set of root nodes in {graph}. From f2aec2ad6d3f0a876f2068c1a728b46c8e19a582 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Fri, 6 Sep 2024 10:15:39 +0300 Subject: [PATCH 34/39] reuse GetIncrementalResult() for the error case --- spec/Section 6 -- Execution.md | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 46b5dd4cb..abc7924f6 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -373,8 +373,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Append {GetCompletedEntry(parent, errors)} to {completed}. - Remove {node} and all of its descendant nodes from {graph}, except for any descendant Incremental Data Record nodes with other parents. - - Let {hasNext} be {false} if {graph} is empty; otherwise, {true}. - - Yield an unordered map containing {completed} and {hasNext}. + - Yield the result of {GetIncrementalResult(graph, completed)}. - Continue to the next completed Pending Incremental Data node. - Replace {node} in {graph} with a new node corresponding to the Completed Incremental Data for {result}. @@ -447,15 +446,15 @@ GetPendingEntry(pendingResults): - Append {pendingEntry} to {pending}. - Return {pending}. 
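The `pending` formatting above can be sketched in TypeScript; the node shape and the identifier scheme below are assumptions for illustration:

```typescript
// Hypothetical shapes for pending delivery-group nodes and their entries.
interface RootNode {
  path: (string | number)[];
  label?: string;
}

interface PendingEntry {
  id: string;
  path: (string | number)[];
  label?: string;
}

let nextId = 0;
const ids = new WeakMap<RootNode, string>();

// Assigns each node a stable unique identifier, so later `incremental` and
// `completed` entries can refer back to it.
function idFor(node: RootNode): string {
  let id = ids.get(node);
  if (id === undefined) {
    id = String(nextId);
    nextId += 1;
    ids.set(node, id);
  }
  return id;
}

function getPendingEntry(
  newRootNodes: ReadonlyArray<RootNode>,
): PendingEntry[] {
  return newRootNodes.map((node) => ({
    id: idFor(node),
    path: node.path,
    label: node.label,
  }));
}
```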
-GetIncrementalResult(graph, incremental, completed, pending): +GetIncrementalResult(graph, completed, incremental, pending): - Let {hasNext} be {false} if {graph} is empty, otherwise, {true}. - Let {incrementalResult} be an unordered map containing {hasNext}. -- If {incremental} is not empty: +- If {incremental} is provided and not empty: - Set the corresponding entry on {incrementalResult} to {incremental}. - If {completed} is not empty: - Set the corresponding entry on {incrementalResult} to {completed}. -- If {pending} is not empty: +- If {pending} is provided and not empty: - Set the corresponding entry on {incrementalResult} to {pending}. - Return {incrementalResult}. From 6862323378d98e0ae38f6d6aaf99215c7bad660e Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Fri, 6 Sep 2024 10:52:10 +0300 Subject: [PATCH 35/39] add descriptions and fix bug within GetNewRootNodes, it needs the old root nodes before the graph was adjusted --- spec/Section 6 -- Execution.md | 91 ++++++++++++++++++++++++++++------ 1 file changed, 77 insertions(+), 14 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index abc7924f6..c1e6a151e 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -354,14 +354,50 @@ incremental portion of the Execution Plan. ### Yielding Incremental Results The procedure for yielding incremental results is specified by the -{YieldIncrementalResults()} algorithm. +{YieldIncrementalResults()} algorithm. The incremental state is stored within a +graph, with root nodes representing the currently pending delivery groups. 
+ +For example, given the following operation: + +```graphql example +{ + ...SlowFragment @defer + fastField +} + +fragment SlowFragment on Query { + ...SlowestFragment @defer + slowField +} + +fragment SlowestFragment on Query { + slowestField +} +``` + +A valid GraphQL executor deferring `SlowFragment` must include a `pending` entry +to that effect within the initial result, while the `pending` entry for +`SlowestFragment` should be delivered together with `SlowFragment`. + +Delivery group nodes may have three different types of child nodes: + +1. Other delivery group nodes, i.e. the node representing `SlowFragment` should + have a child node representing `SlowestFragment`. +2. Pending incremental data nodes, i.e. the node for `SlowFragment` should + initially have a node for `slowField`. +3. Completed incremental data nodes, i.e. when `slowField` is completed, the + pending incremental data node for `slowField` should be replaced with a node + representing the completed data. + +The {YieldIncrementalResults()} algorithm is responsible for updating the graph +as it yields the incremental results. YieldIncrementalResults(data, errors, incrementalDataRecords): - Let {graph} be the result of {GraphFromRecords(incrementalDataRecords)}. - Let {rootNodes} be the result of {GetNewRootNodes(graph)}. - Update {graph} to the subgraph rooted at nodes in {rootNodes}. -- Yield the result of {GetInitialResult(data, errors, pendingResults)}. +- Yield the result of {GetInitialResult(data, errors, rootNodes)}. - For each completed child Pending Incremental Data node of a root node in {graph}: - Let {incrementalDataRecord} be the Pending Incremental Data for that node; @@ -373,7 +409,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Append {GetCompletedEntry(parent, errors)} to {completed}. - Remove {node} and all of its descendant nodes from {graph}, except for any descendant Incremental Data Record nodes with other parents. 
- - Yield the result of {GetIncrementalResult(graph, completed)}. + - Yield the result of {GetSubsequentResult(graph, completed)}. - Continue to the next completed Pending Incremental Data node. - Replace {node} in {graph} with a new node corresponding to the Completed Incremental Data for {result}. @@ -397,11 +433,11 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Append {GetCompletedEntry(completedDeferredFragment)} to {completed}. - Remove {completedDeferredFragment} from {graph}, promoting its child Deferred Fragment nodes to root nodes. - - Let {newRootNodes} be the result of {GetNewRootNodes(graph)}. + - Let {newRootNodes} be the result of {GetNewRootNodes(graph, rootNodes)}. - Add all nodes in {newRootNodes} to {rootNodes}. - Update {graph} to the subgraph rooted at nodes in {rootNodes}. - Let {pending} be the result of {GetPendingEntry(newRootNodes)}. - - Yield the result of {GetIncrementalResult(graph, incremental, completed, + - Yield the result of {GetSubsequentResult(graph, incremental, completed, pending)}. - Complete this incremental result stream. @@ -418,17 +454,28 @@ GraphFromRecords(incrementalDataRecords, graph): to {newGraph}, or the {parent} is not defined. - Return {newGraph}. -GetNewRootNodes(graph): +The {GetNewRootNodes()} algorithm is responsible for determining the new root +nodes that must be reported as pending. Any delivery groups without any +execution groups should not be reported as pending, and any child delivery +groups for these "empty" delivery groups should be reported as pending in their +stead. + +GetNewRootNodes(graph, oldRootNodes): -- Initialize {newPendingResults} to the empty set. +- Initialize {newRootNodes} to the empty set. - Initialize {rootNodes} to the set of root nodes in {graph}. - For each {rootNode} of {rootNodes}: - If {rootNode} has no children Pending Incremental Data nodes: - Let {children} be the set of child Deferred Fragment nodes of {rootNode}. 
- Add each of the nodes in {children} to {rootNodes}. - Continue to the next {rootNode} of {rootNodes}. - - Add {rootNode} to {newPendingResults}. -- Return {newPendingResults}. + - If {oldRootNodes} does not contain {rootNode}, add {rootNode} to + {newRootNodes}. +- Return {newRootNodes}. + +Formatting of the initial result is defined by the {GetInitialResult()} +algorithm. It will only be called when there is an incremental result stream, +and so `hasNext` will always be set to {true}. GetInitialResult(data, errors, pendingResults): @@ -436,17 +483,26 @@ GetInitialResult(data, errors, pendingResults): - Let {hasNext} be {true}. - Return an unordered map containing {data}, {errors}, {pending}, and {hasNext}. -GetPendingEntry(pendingResults): +Formatting the `pending` of initial and subsequentResults is defined by the +{GetPendingEntry()} algorithm. Given a set of new root nodes added to the graph, +{GetPendingEntry()} returns a list of formatted `pending` entries. + +GetPendingEntry(newRootNodes): - Initialize {pending} to an empty list. -- For each {pendingResult} of {pendingResult}: - - Let {id} be a unique identifier for {pendingResult}. - - Let {path} and {label} be the corresponding entries on {pendingResult}. +- For each {newRootNode} of {newRootNodes}: + - Let {id} be a unique identifier for {newRootNode}. + - Let {path} and {label} be the corresponding entries on {newRootNode}. - Let {pendingEntry} be an unordered map containing {id}, {path}, and {label}. - Append {pendingEntry} to {pending}. - Return {pending}. -GetIncrementalResult(graph, completed, incremental, pending): +Formatting of subsequent incremental results is defined by the +{GetSubsequentResult()} algorithm. Given the current graph, and any `completed`, +`incremental`, and `pending` entries, it produces an appropriately formatted +subsequent incremental response. + +GetSubsequentResult(graph, completed, incremental, pending): - Let {hasNext} be {false} if {graph} is empty, otherwise, {true}. 
- Let {incrementalResult} be an unordered map containing {hasNext}.
@@ -458,6 +514,10 @@ GetIncrementalResult(graph, completed, incremental, pending):
 - Set the corresponding entry on {incrementalResult} to {pending}.
 - Return {incrementalResult}.
 
+Formatting of incremental entries is defined by the {GetIncrementalEntry()}
+algorithm. Execution groups are tagged with the `id` and `subPath` combination
+optimized to produce the shortest `subPath`.
+
 GetIncrementalEntry(incrementalDataRecord, graph):
 
 - Let {deferredFragments} be the Deferred Fragments incrementally completed by
@@ -473,6 +533,9 @@ GetIncrementalEntry(incrementalDataRecord, graph):
 - Let {id} be the unique identifier for {bestDeferredFragment}.
 - Return an unordered map containing {id}, {subPath}, {data}, and {errors}.
 
+Formatting of completed incremental results is defined by the
+{GetCompletedEntry()} algorithm.
+
 GetCompletedEntry(pendingResult, errors):
 
 - Let {id} be the unique identifier for {pendingResult}.

From 8dad2e87434b7623cae4459fb4c683fe42901da6 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski <yaacovCR@gmail.com>
Date: Fri, 6 Sep 2024 11:06:01 +0300
Subject: [PATCH 36/39] finish addressing review comments

---
 spec/Section 6 -- Execution.md | 44 ++++++++++++++++++++++++++--------
 1 file changed, 34 insertions(+), 10 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index c1e6a151e..adf55e936 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -381,16 +381,18 @@ to that effect within the initial result, while the `pending` entry for
 Delivery group nodes may have three different types of child nodes:
 
-1. Other delivery group nodes, i.e. the node representing `SlowFragment` should
+1. Child Delivery Group nodes, i.e. the node representing `SlowFragment` should
    have a child node representing `SlowestFragment`.
-2. Pending incremental data nodes, i.e. the node for `SlowFragment` should
+2. Pending Incremental Data nodes, i.e.
the node for `SlowFragment` should initially have a node for `slowField`. -3. Completed incremental data nodes, i.e. when `slowField` is completed, the +3. Completed Incremental Data nodes, i.e. when `slowField` is completed, the pending incremental data node for `slowField` should be replaced with a node representing the completed data. The {YieldIncrementalResults()} algorithm is responsible for updating the graph -as it yields the incremental results. +as it yields the incremental results. When a delivery group contains only +completed incremental data nodes, the group is removed from the graph as it is +delivered. YieldIncrementalResults(data, errors, incrementalDataRecords): @@ -416,7 +418,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Let {resultIncrementalDataRecords} be {incrementalDataRecords} on {result}. - Update {graph} to {GraphFromRecords(resultIncrementalDataRecords, graph)}. - Let {completedDeferredFragments} be the set of root nodes in {graph} without - any child Pending Data nodes. + any child Pending Incremental Data nodes. - Let {completedIncrementalDataNodes} be the set of completed Incremental Data nodes that are children of {completedDeferredFragments}. - If {completedIncrementalDataNodes} is empty, continue to the next completed @@ -441,17 +443,39 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): pending)}. - Complete this incremental result stream. +New Incremental Data Records are added to the {graph} by the +{GraphFromRecords()} algorithm as Pending Incremental Data nodes directed from +the Deferred Fragments they incrementally complete. + GraphFromRecords(incrementalDataRecords, graph): - If {graph} is not provided, initialize to an empty graph. - Let {newGraph} be a new directed acyclic graph containing all of the nodes and edges in {graph}. 
- For each {incrementalDataRecord} of {incrementalDataRecords}: - - Add {incrementalDataRecord} to {newGraph} as a new Pending Data node - directed from the {pendingResults} that it completes, adding each of - {pendingResults} to {newGraph} as a new node directed from its {parent}, - recursively adding each {parent} until {incrementalDataRecord} is connected - to {newGraph}, or the {parent} is not defined. + - Let {deferredFragments} be the Deferred Fragments incrementally completed by + {incrementalDataRecord}. + - For each {deferredFragment} of {deferredFragments}: + - Reset {newGraph} to the result of + {GraphWithDeferredFragmentRecord(deferredFragment, newGraph)}. + - Add {incrementalDataRecord} to {newGraph} as a new Pending Incremental Data + node directed from the {deferredFragments} that it completes. +- Return {newGraph}. + +The {GraphWithDeferredFragmentRecord()} algorithm returns a new graph containing +the provided Deferred Fragment Record, recursively adding its parent Deferred +Fragment nodes. + +GraphWithDeferredFragmentRecord(deferredFragment, graph): + +- If {graph} contains a Deferred Fragment node representing {deferredFragment}, + return {graph}. +- Let {parent} be the parent Deferred Fragment of {deferredFragment}. +- If {parent} is defined, let {newGraph} be the result of + {GraphWithDeferredFragmentRecord(parent, graph)}; otherwise, let {newGraph} be + a new directed acyclic graph containing all of the nodes and edges in {graph}. +- Add {deferredFragment} to {newGraph} as a new Deferred Fragment node directed + from {parent}, if defined. - Return {newGraph}. 
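The recursive parent-linking performed by {GraphFromRecords()} and {GraphWithDeferredFragmentRecord()} above can be sketched as follows. This is a non-normative illustration: the graph is modeled as a plain mapping from nodes to child sets, and the attribute names `parent` and `deferred_fragments` are assumptions of this sketch, not spec terms.

```python
# Non-normative sketch of GraphFromRecords() / GraphWithDeferredFragmentRecord().
# Assumed shapes: a DeferredFragment knows its `parent`, and an
# IncrementalDataRecord knows the `deferred_fragments` it incrementally completes.

class DeferredFragment:
    def __init__(self, label, parent=None):
        self.label = label
        self.parent = parent

class IncrementalDataRecord:
    def __init__(self, deferred_fragments):
        self.deferred_fragments = deferred_fragments

def graph_with_deferred_fragment(fragment, graph):
    """Return a new graph containing `fragment`, recursively adding parents."""
    if fragment in graph:
        return graph
    new_graph = {node: set(children) for node, children in graph.items()}
    if fragment.parent is not None:
        # Ensure the parent chain exists, then link this fragment beneath it.
        new_graph = graph_with_deferred_fragment(fragment.parent, new_graph)
        new_graph[fragment.parent].add(fragment)
    new_graph[fragment] = set()
    return new_graph

def graph_from_records(records, graph=None):
    """Add each record as a Pending Incremental Data node under its fragments."""
    new_graph = {n: set(c) for n, c in (graph or {}).items()}
    for record in records:
        for fragment in record.deferred_fragments:
            new_graph = graph_with_deferred_fragment(fragment, new_graph)
            new_graph[fragment].add(record)
    return new_graph

# Mirrors the SlowFragment / SlowestFragment example from the text:
slow = DeferredFragment("SlowFragment")
slowest = DeferredFragment("SlowestFragment", parent=slow)
graph = graph_from_records([IncrementalDataRecord([slowest])])
# `slow` is now a root Deferred Fragment node, `slowest` its child, and the
# record a Pending Incremental Data node beneath `slowest`.
```

Note how adding a record never mutates the input graph; each step returns a new graph, matching the algorithm's "Let {newGraph} be a new directed acyclic graph" phrasing.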
The {GetNewRootNodes()} algorithm is responsible for determining the new root From fb94bc6ee7255407f8325ab5920b4f41ef8add93 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Fri, 6 Sep 2024 11:09:44 +0300 Subject: [PATCH 37/39] add missing word --- spec/Section 6 -- Execution.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index adf55e936..196d75434 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -507,9 +507,9 @@ GetInitialResult(data, errors, pendingResults): - Let {hasNext} be {true}. - Return an unordered map containing {data}, {errors}, {pending}, and {hasNext}. -Formatting the `pending` of initial and subsequentResults is defined by the -{GetPendingEntry()} algorithm. Given a set of new root nodes added to the graph, -{GetPendingEntry()} returns a list of formatted `pending` entries. +Formatting the `pending` entries of initial and subsequentResults is defined by +the {GetPendingEntry()} algorithm. Given a set of new root nodes added to the +graph, {GetPendingEntry()} returns a list of formatted `pending` entries. GetPendingEntry(newRootNodes): From 99107f02b9ef34bd091c2a3398338d697d773b5d Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Wed, 18 Sep 2024 13:43:53 -0400 Subject: [PATCH 38/39] Add Response Section for defer/stream (#4) * Add Response Section for defer/stream * review fixes * fix --- spec/Section 7 -- Response.md | 206 ++++++++++++++++++++++++++++++++-- 1 file changed, 195 insertions(+), 11 deletions(-) diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md index bd9448293..2a11c78da 100644 --- a/spec/Section 7 -- Response.md +++ b/spec/Section 7 -- Response.md @@ -10,7 +10,12 @@ the case that any _field error_ was raised on a field and was replaced with ## Response Format -A response to a GraphQL request must be a map. 
+A response to a GraphQL request must be a map or a stream of incrementally
+delivered results. The response will be a stream of incrementally delivered
+results when the GraphQL service has deferred or streamed data as a result of
+the `@defer` or `@stream` directives. When the response of the GraphQL operation
+contains incrementally delivered results, the first value will be an initial
+payload, followed by one or more subsequent payloads.
 
 If the request raised any errors, the response map must contain an entry with
 key `errors`. The value of this entry is described in the "Errors" section. If
@@ -22,14 +27,31 @@ key `data`. The value of this entry is described in the "Data" section. If the
 request failed before execution, due to a syntax error, missing information, or
 validation error, this entry must not be present.
 
+When the response of the GraphQL operation contains incrementally delivered
+results, both the initial payload and all subsequent payloads must contain an
+entry with key `hasNext`. The value of this entry must be {true} for all but the
+last response in the stream. The value of this entry must be {false} for the
+last response of the stream. This entry must not be present for GraphQL
+operations that return a single response map.
+
+When the response of the GraphQL operation contains incrementally delivered
+results, both the initial payload and any subsequent payloads may contain
+entries with the keys `pending`, `incremental`, and/or `completed`. The values
+of these entries are described in the "Pending", "Incremental", and "Completed"
+sections below.
+
 The response map may also contain an entry with key `extensions`. This entry, if
 set, must have a map as its value. This entry is reserved for implementers to
 extend the protocol however they see fit, and hence there are no additional
 restrictions on its contents.
When the response of the GraphQL operation is a +response stream, the initial payload and any subsequent payloads may contain an +entry with the key `extensions`, also reserved for implementers to extend the +protocol however they see fit. Additionally, implementers may send subsequent +payloads containing only `hasNext` and `extensions` entries. To ensure future changes to the protocol do not break existing services and clients, the top level response map must not contain any entries other than the -three described above. +entries described above. Note: When `errors` is present in the response, it may be helpful for it to appear first when serialized to make it more clear when errors are present in a @@ -48,6 +70,10 @@ present in the result. If an error was raised during the execution that prevented a valid response, the `data` entry in the response should be `null`. +When the response of the GraphQL operation contains incrementally delivered +results, `data` may only be present in the initial payload. `data` must not be +present in any subsequent payloads. + ### Errors The `errors` entry in the response is a non-empty list of errors raised during @@ -107,14 +133,8 @@ syntax element. If an error can be associated to a particular field in the GraphQL result, it must contain an entry with the key `path` that details the path of the response field which experienced the error. This allows clients to identify whether a -`null` result is intentional or caused by a runtime error. - -If present, this field must be a list of path segments starting at the root of -the response and ending with the field associated with the error. Path segments -that represent fields must be strings, and path segments that represent list -indices must be 0-indexed integers. If the error happens in an aliased field, -the path to the error must use the aliased name, since it represents a path in -the response, not in the request. +`null` result is intentional or caused by a runtime error. 
The value of this +field is described in the [Path](#sec-Path) section. For example, if fetching one of the friends' names fails in the following operation: @@ -244,6 +264,170 @@ discouraged. } ``` +### Path + +A `path` field allows for the association to a particular field in a GraphQL +result. This field should be a list of path segments starting at the root of the +response and ending with the field to be associated with. Path segments that +represent fields should be strings, and path segments that represent list +indices should be 0-indexed integers. If the path is associated to an aliased +field, the path should use the aliased name, since it represents a path in the +response, not in the request. + +When the `path` field is present on an "Error result", it indicates the response +field which experienced the error. + +### Pending + +The `pending` entry in the response is a non-empty list of Pending Results. If +the response of the GraphQL operation contains incrementally delivered results, +this field may appear on both the initial and subsequent payloads. If present, +the `pending` entry must contain at least one Pending Result. + +Each Pending Result corresponds to either a `@defer` or `@stream` directive +located at a specific path in the response data. The Pending Result is used to +communicate that the GraphQL service has chosen to incrementally deliver the +data associated with this `@defer` or `@stream` directive and clients should +expect the associated data in either the current payload, or one of the +following payloads. + +**Pending Result Format** + +Every Pending Result must contain an entry with the key `id` with a string +value. This `id` should be used by clients to correlate Pending Results with +Completed Results. The `id` value must be unique for the entire response stream. +There must not be any other Pending Result in any payload that contains the same +`id`. + +Every Pending Result must contain an entry with the key `path`. 
When the Pending
+Result is associated with a `@stream` directive, it indicates the response list
+field that is not known to be complete. Clients should expect the GraphQL
+Service to incrementally deliver the remainder of the indicated list field.
+When the Pending Result is associated with a `@defer` directive, it indicates
+that the response fields contained in the deferred fragment are not known to be
+complete. Clients should expect the GraphQL Service to incrementally deliver the
+remainder of the fields contained in the deferred fragment.
+
+If a Pending Result is not returned for a `@defer` or `@stream` directive,
+clients must assume that the GraphQL service chose not to incrementally deliver
+this data, and the data can be found either in the `data` entry in the initial
+payload, or one of the Incremental Results in a prior payload.
+
+### Incremental
+
+The `incremental` entry in the response is a non-empty list of Incremental
+Results. If the response of the GraphQL operation contains incrementally
+delivered results, this field may appear on both the initial and subsequent
+payloads. If present, the `incremental` entry must contain at least one
+Incremental Result.
+
+The Incremental Result is used to deliver data that the GraphQL service has
+chosen to incrementally deliver. An Incremental Result may be either an
+Incremental List Result or an Incremental Object Result.
+
+An Incremental List Result is used to deliver additional list items for a list
+field with a `@stream` directive.
+
+An Incremental Object Result is used to deliver additional response fields that
+were contained in one or more fragments with a `@defer` directive.
+
+**Incremental Result Format**
+
+Every Incremental Result must contain an entry with the key `id` with a string
+value. This `id` must match the `id` that was returned in a prior Pending
+Result.
+
+Additionally, Incremental List Results and Incremental Object Results have
+further requirements.
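As a non-normative illustration of the formats described below (the `id` values and field data here are invented for this sketch), a subsequent payload might combine an Incremental Object Result and an Incremental List Result:

```json example
{
  "incremental": [
    { "id": "0", "data": { "slowField": "deferred value" } },
    { "id": "1", "items": ["third item", "fourth item"] }
  ],
  "hasNext": true
}
```

Each `id` refers back to a Pending Result announced in an earlier payload of the same response stream.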
+
+**Incremental List Result Format**
+
+An Incremental List Result's `id` entry must match the `id` that was returned in
+a prior Pending Result. This Pending Result must be associated with a `@stream`
+directive.
+
+The Incremental List Result's `path` can be determined using the prior Pending
+Result with the same `id` as this Incremental Result. The Incremental List
+Result's `path` is the same as the Pending Result's `path`.
+
+Every Incremental List Result must contain an `items` entry. The `items` entry
+must contain a list of additional list items for the response field at the
+Incremental List Result's `path`. This output will be a list of the same type as
+the response field at this path.
+
+If any field errors were raised during the execution of the results in `items`
+and these errors bubbled to a path higher than the Incremental List Result's
+path, the Incremental List Result is considered failed and should not be
+included in the response stream. The errors that caused this failure will be
+included in a Completed Result.
+
+If any field errors were raised during the execution of the results in `items`
+and these errors did not bubble to a path higher than the Incremental List
+Result's path, the Incremental List Result must contain an entry with key
+`errors` containing these field errors. The value of this entry is described in
+the "Errors" section.
+
+**Incremental Object Result Format**
+
+An Incremental Object Result's `id` entry must match the `id` that was returned
+in a prior Pending Result. This Pending Result must be associated with a
+`@defer` directive.
+
+The Incremental Object Result's `path` can be determined using the prior Pending
+Result with the same `id` as this Incremental Result. The Incremental Object
+Result may contain a `subPath` entry. If the `subPath` entry is present, the
+Incremental Object Result's path can be determined by concatenating the Pending
+Result's `path` with this `subPath`.
If no `subPath` entry is present, the path
+is the same as the Pending Result's `path`.
+
+Every Incremental Object Result must contain a `data` entry. The `data` entry
+must contain a map of additional response fields. The `data` entry in an
+Incremental Object Result will be of the type of a particular field in the
+GraphQL result. The Incremental Object Result's `path` will contain the path
+segments of the field this data is associated with.
+
+An Incremental Object Result's data may contain response fields that were
+contained in more than one deferred fragment. In that case, the `id` of the
+Incremental Object Result must point to the Pending Result that results in the
+shortest `subPath`.
+
+If any field errors were raised during the execution of the results in `data`
+and these errors bubbled to a path higher than the Incremental Object Result's
+path, the Incremental Object Result is considered failed and should not be
+included in the response stream. The errors that caused this failure will be
+included in a Completed Result.
+
+If any field errors were raised during the execution of the results in `data`
+and these errors did not bubble to a path higher than the Incremental Object
+Result's path, the Incremental Object Result must contain an entry with key
+`errors` containing these field errors. The value of this entry is described in
+the "Errors" section.
+
+### Completed
+
+The `completed` entry in the response is a non-empty list of Completed Results.
+If the response of the GraphQL operation contains incrementally delivered
+results, this field may appear on both the initial and subsequent payloads. If
+present, the `completed` entry must contain at least one Completed Result.
+
+Each Completed Result corresponds to a prior Pending Result. The Completed
+Result is used to communicate that the GraphQL service has completed the
+incremental delivery of the data associated with the corresponding Pending
+Result.
The associated data must have been completed in the current payload.
+
+**Completed Result Format**
+
+Every Completed Result must contain an entry with the key `id` with a string
+value. The `id` entry must match the `id` that was returned in a prior Pending
+Result.
+
+A Completed Result may contain an `errors` entry. When the `errors` entry is
+present, it informs clients that the delivery of the data associated with the
+corresponding Pending Result has failed, due to an error bubbling to a path
+higher than the Incremental Data Result's path. The `errors` entry must contain
+these field errors. The value of this entry is described in the "Errors"
+section.
+
 ## Serialization Format
 
 GraphQL does not require a specific serialization format. However, clients

From 3ef628c033a390563f8256514441b63b6f78c92a Mon Sep 17 00:00:00 2001
From: Rob Richard <rob@1stdibs.com>
Date: Wed, 18 Sep 2024 13:51:10 -0400
Subject: [PATCH 39/39] Add directives and validation sections (#5)

* Add defer and stream directives to type system

* Add defer/stream validation rules

---
 cspell.yml                       |   1 +
 spec/Section 3 -- Type System.md | 106 +++++++++++++++++-
 spec/Section 5 -- Validation.md  | 179 +++++++++++++++++++++++++++++++
 3 files changed, 284 insertions(+), 2 deletions(-)

diff --git a/cspell.yml b/cspell.yml
index 0ea1def96..ec8eaabc2 100644
--- a/cspell.yml
+++ b/cspell.yml
@@ -4,6 +4,7 @@ ignoreRegExpList:
   - /[a-z]{2,}'s/
 words:
   # Terms of art
+  - deprioritization
   - endianness
   - interoperation
   - monospace
diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md
index 4f9356f0c..d394121c3 100644
--- a/spec/Section 3 -- Type System.md
+++ b/spec/Section 3 -- Type System.md
@@ -794,8 +794,9 @@ And will yield the subset of each object type queried:
 When querying an Object, the resulting mapping of fields are conceptually
 ordered in the same order in which they were encountered during execution,
 excluding fragments for which the type does not apply and fields or fragments
-that are
skipped via `@skip` or `@include` directives or temporarily skipped via
+`@defer`. This ordering is correctly produced when using the {CollectFields()}
+algorithm.
 
 Response serialization formats capable of representing ordered maps should
 maintain this ordering. Serialization formats which can only represent unordered
@@ -1942,6 +1943,11 @@ by a validator, executor, or client tool such as a code generator.
 
 GraphQL implementations should provide the `@skip` and `@include` directives.
 
+GraphQL implementations are not required to implement the `@defer` and `@stream`
+directives. If either or both of these directives are implemented, they must be
+implemented according to this specification. GraphQL implementations that do not
+support these directives must not make them available via introspection.
+
 GraphQL implementations that support the type system definition language must
 provide the `@deprecated` directive if representing deprecated portions of the
 schema.
@@ -2162,3 +2168,99 @@ to the relevant IETF specification.
 
 ```graphql example
 scalar UUID @specifiedBy(url: "https://tools.ietf.org/html/rfc4122")
 ```
+
+### @defer
+
+```graphql
+directive @defer(
+  label: String
+  if: Boolean! = true
+) on FRAGMENT_SPREAD | INLINE_FRAGMENT
+```
+
+The `@defer` directive may be provided for fragment spreads and inline fragments
+to inform the executor to delay the execution of the current fragment to
+indicate deprioritization of the current fragment. A query with the `@defer`
+directive will cause the request to potentially return multiple responses, where
+non-deferred data is delivered in the initial response and data deferred is
+delivered in a subsequent response. `@include` and `@skip` take precedence over
+`@defer`.
+
+```graphql example
+query myQuery($shouldDefer: Boolean) {
+  user {
+    name
+    ...someFragment @defer(label: "someLabel", if: $shouldDefer)
+  }
+}
+fragment someFragment on User {
+  id
+  profile_picture {
+    uri
+  }
+}
+```
+
+#### @defer Arguments
+
+- `if: Boolean! = true` - When `true`, fragment _should_ be deferred (see
+  related note below). When `false`, fragment will not be deferred and data will
+  be included in the initial response. Defaults to `true` when omitted.
+- `label: String` - May be used by GraphQL clients to identify the data from
+  responses and associate it with the corresponding defer directive. If
+  provided, the GraphQL service must add it to the corresponding pending object
+  in the response. `label` must be a unique label across all `@defer` and
+  `@stream` directives in a document. `label` must not be provided as a
+  variable.
+
+### @stream
+
+```graphql
+directive @stream(
+  label: String
+  if: Boolean! = true
+  initialCount: Int = 0
+) on FIELD
+```
+
+The `@stream` directive may be provided for a field of `List` type so that the
+backend can leverage technology such as asynchronous iterators to provide a
+partial list in the initial response, and additional list items in subsequent
+responses. `@include` and `@skip` take precedence over `@stream`.
+
+```graphql example
+query myQuery($shouldStream: Boolean) {
+  user {
+    friends(first: 10) {
+      nodes @stream(label: "friendsStream", initialCount: 5, if: $shouldStream)
+    }
+  }
+}
+```
+
+#### @stream Arguments
+
+- `if: Boolean! = true` - When `true`, field _should_ be streamed (see related
+  note below). When `false`, the field will not be streamed and all list items
+  will be included in the initial response. Defaults to `true` when omitted.
+- `label: String` - May be used by GraphQL clients to identify the data from
+  responses and associate it with the corresponding stream directive. If
+  provided, the GraphQL service must add it to the corresponding pending object
+  in the response.
`label` must be a unique label across all `@defer` and
+  `@stream` directives in a document. `label` must not be provided as a
+  variable.
+- `initialCount: Int` - The number of list items the service should return as
+  part of the initial response. If omitted, defaults to `0`. A field error will
+  be raised if the value of this argument is less than `0`.
+
+Note: The ability to defer and/or stream parts of a response can have a
+potentially significant impact on application performance. Developers generally
+need clear, predictable control over their application's performance. It is
+highly recommended that GraphQL services honor the `@defer` and `@stream`
+directives on each execution. However, the specification allows advanced use
+cases where the service can determine that it is more performant to not defer
+and/or stream. Therefore, GraphQL clients _must_ be able to process a response
+that ignores the `@defer` and/or `@stream` directives. This also applies to the
+`initialCount` argument on the `@stream` directive. Clients _must_ be able to
+process a streamed response that contains a different number of initial list
+items than what was specified in the `initialCount` argument.
diff --git a/spec/Section 5 -- Validation.md b/spec/Section 5 -- Validation.md
index 969d99f88..66928ea1c 100644
--- a/spec/Section 5 -- Validation.md
+++ b/spec/Section 5 -- Validation.md
@@ -422,6 +422,7 @@ FieldsInSetCanMerge(set):
   {set} including visiting fragments and inline fragments.
   - Given each pair of members {fieldA} and {fieldB} in {fieldsForName}:
     - {SameResponseShape(fieldA, fieldB)} must be true.
+    - {SameStreamDirective(fieldA, fieldB)} must be true.
     - If the parent types of {fieldA} and {fieldB} are equal or if either is not
       an Object Type:
      - {fieldA} and {fieldB} must have identical field names.
@@ -456,6 +457,16 @@ SameResponseShape(fieldA, fieldB):
 - If {SameResponseShape(subfieldA, subfieldB)} is {false}, return {false}.
 - Return {true}.
+SameStreamDirective(fieldA, fieldB):
+
+- If neither {fieldA} nor {fieldB} has a directive named `stream`:
+  - Return {true}.
+- If both {fieldA} and {fieldB} have a directive named `stream`:
+  - Let {streamA} be the directive named `stream` on {fieldA}.
+  - Let {streamB} be the directive named `stream` on {fieldB}.
+  - If {streamA} and {streamB} have identical sets of arguments, return {true}.
+- Return {false}.
+
 Note: In prior versions of the spec the term "composite" was used to signal a
 type that is either an Object, Interface or Union type.

@@ -1521,6 +1532,174 @@ query ($foo: Boolean = true, $bar: Boolean = false) {
 }
 ```

+### Defer And Stream Directives Are Used On Valid Root Field
+
+**Formal Specification**
+
+- For every {directive} in a document.
+- Let {directiveName} be the name of {directive}.
+- Let {mutationType} be the root Mutation type in {schema}.
+- Let {subscriptionType} be the root Subscription type in {schema}.
+- If {directiveName} is "defer" or "stream":
+  - The parent type of {directive} must not be {mutationType} or
+    {subscriptionType}.
+
+**Explanatory Text**
+
+The defer and stream directives are not allowed to be used on root fields of the
+mutation or subscription type.
+
+For example, the following document will not pass validation because `@defer`
+has been used on a root mutation field:
+
+```raw graphql counter-example
+mutation {
+  ... @defer {
+    mutationField
+  }
+}
+```
+
+### Defer And Stream Directives Are Used On Valid Operations
+
+**Formal Specification**
+
+- Let {subscriptionFragments} be the empty set.
+- For each {operation} in a document:
+  - If {operation} is a subscription operation:
+    - Let {fragments} be every fragment referenced by that {operation}
+      transitively.
+    - For each {fragment} in {fragments}:
+      - Let {fragmentName} be the name of {fragment}.
+      - Add {fragmentName} to {subscriptionFragments}.
+- For every {directive} in a document:
+  - Let {directiveName} be the name of {directive}.
+  - If {directiveName} is not "defer" or "stream":
+    - Continue to the next {directive}.
+  - Let {ancestor} be the ancestor operation or fragment definition of
+    {directive}.
+  - If {ancestor} is a fragment definition:
+    - If the fragment name of {ancestor} is not present in
+      {subscriptionFragments}:
+      - Continue to the next {directive}.
+  - If {ancestor} is not a subscription operation:
+    - Continue to the next {directive}.
+  - Let {if} be the argument named "if" on {directive}.
+  - {if} must be defined.
+  - Let {argumentValue} be the value passed to {if}.
+  - {argumentValue} must be a variable, or the boolean value "false".
+
+**Explanatory Text**
+
+The defer and stream directives cannot be used to defer or stream data in
+subscription operations. If these directives appear in a subscription operation
+they must be disabled using the "if" argument. This rule will not permit any
+defer or stream directives on a subscription operation that cannot be disabled
+using the "if" argument.
+
+For example, the following document will not pass validation because `@defer`
+has been used in a subscription operation with no "if" argument defined:
+
+```raw graphql counter-example
+subscription sub {
+  newMessage {
+    ... @defer {
+      body
+    }
+  }
+}
+```
+
+### Defer And Stream Directive Labels Are Unique
+
+**Formal Specification**
+
+- Let {labelValues} be an empty set.
+- For every {directive} in the document:
+  - Let {directiveName} be the name of {directive}.
+  - If {directiveName} is "defer" or "stream":
+    - For every {argument} in {directive}:
+      - Let {argumentName} be the name of {argument}.
+      - Let {argumentValue} be the value passed to {argument}.
+      - If {argumentName} is "label":
+        - {argumentValue} must not be a variable.
+        - {argumentValue} must not be present in {labelValues}.
+        - Append {argumentValue} to {labelValues}.
+
+**Explanatory Text**
+
+The `@defer` and `@stream` directives each accept an argument "label".
This
+label may be used by GraphQL clients to uniquely identify response payloads. If
+a label is passed, it must not be a variable and it must be unique across all
+other `@defer` and `@stream` directives in the document.
+
+For example, the following document is valid:
+
+```graphql example
+{
+  dog {
+    ...fragmentOne
+    ...fragmentTwo @defer(label: "dogDefer")
+  }
+  pets @stream(label: "petStream") {
+    name
+  }
+}
+
+fragment fragmentOne on Dog {
+  name
+}
+
+fragment fragmentTwo on Dog {
+  owner {
+    name
+  }
+}
+```
+
+For example, the following document will not pass validation because the same
+label is used in different `@defer` and `@stream` directives:
+
+```raw graphql counter-example
+{
+  dog {
+    ...fragmentOne @defer(label: "MyLabel")
+  }
+  pets @stream(label: "MyLabel") {
+    name
+  }
+}
+
+fragment fragmentOne on Dog {
+  name
+}
+```
+
+### Stream Directives Are Used On List Fields
+
+**Formal Specification**
+
+- For every {directive} in a document.
+- Let {directiveName} be the name of {directive}.
+- If {directiveName} is "stream":
+  - Let {adjacent} be the AST node the directive affects.
+  - {adjacent} must be a List type.
+
+**Explanatory Text**
+
+GraphQL directive locations do not provide enough granularity to distinguish the
+type of fields used in a GraphQL document. Since the stream directive is only
+valid on list fields, an additional validation rule must be used to ensure it is
+used correctly.
+
+For example, the following document will only pass validation if `field` is
+defined as a List type in the associated schema.
+
+```graphql counter-example
+query {
+  field @stream(initialCount: 0)
+}
+```
+
 ## Variables
 
 ### Variable Uniqueness