
issue42 Remove references to coordinator endpoints from the LRAClient API #68

Closed
xstefank wants to merge 1 commit

Conversation

xstefank
Member

@xstefank xstefank commented Dec 12, 2018

issue #42

Signed-off-by: xstefank [email protected]

The specification text will be adjusted as a part of the fix for issue #62 with the move to microprofile config.

@xstefank
Member Author

@mmusgrov Just to clarify my intent here - we don't want to discourage any implementation from basing LRA on a decentralized, self-organized solution; however, this PR requires that all implementations based on orchestration are configured in the same way. Is it ok to fix the issue this way?

```java
@@ -120,14 +119,9 @@ private static void initTck(LRAClient lraClient) {
        Thread.sleep(1000);
    }

    int servicePort = Integer.getInteger("service.http.port", TEST_SWARM_PORT);
```
Contributor


"service.http.port" is the config for where the TCK test resources (ActivityController etc) are deployed. I know that in the Narayana case our coordinator and these resources are both deployed to the same server and port but they don't need to be. Keeping it configurable would allow another vendor to run the TCK resources separately from whatever mechanism they use to configure their LRAClient implementation.

But clearly we would need better config names, maybe something like:

```java
@Inject @ConfigProperty(name="lra.tck.http.port", defaultValue="8080")
private int servicePort;
```
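For comparison, the same fallback behavior is available without CDI injection. The sketch below is a plain-JDK illustration only: the property name `lra.tck.http.port` comes from the suggestion above, while the class name and default constant are hypothetical.

```java
public class TckConfig {
    // Hypothetical default, mirroring the suggested defaultValue="8080".
    static final int DEFAULT_TCK_PORT = 8080;

    // Plain-JDK equivalent of the @ConfigProperty injection above:
    // Integer.getInteger reads the named system property and falls back
    // to the supplied default when the property is unset or malformed.
    static int servicePort() {
        return Integer.getInteger("lra.tck.http.port", DEFAULT_TCK_PORT);
    }

    public static void main(String[] args) {
        System.out.println(servicePort());
    }
}
```

Run without `-Dlra.tck.http.port` this prints the default; a vendor running the TCK resources on a different port would override it on the command line.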

Member Author


@mmusgrov ok, I will revert this line.

```java
micrserviceBaseUrl = new URL(String.format("http://localhost:%d", servicePort));
rcBaseUrl = new URL(String.format("http://%s:%d/%s", rcHost, rcPort, rcPath));
String dafaultUrl = "http://localhost:8080";
micrserviceBaseUrl = new URL(System.getProperty(LRA_HTTP_URL, dafaultUrl));
```
Contributor


See my other comment about the use of the service.http.port property

```java
 * Key for looking up the config property that specifies which host a
 * coordinator is running on
 * The config property that specifies a URL exposed for LRA orchestration and
 * management. An implementation is not required to make use of this property.
 */
```
Contributor


Is the intent to document this property properly in issue #62 (Interoperability: Clients running on different MP platforms need to know how to start and end an LRA)? If so, will you add a note to the issue description and a link to issue #62?

Member Author


Will do. However, is this sufficient for the javadoc? Please see my comment in the PR: #68 (comment)

Contributor


This bit of config is so that the TCK knows when the implementation has recovered. We briefly discussed this yesterday ("Are we going to specify recovery semantics in the spec?").

We said that we would propose an SPI that implementations would implement in order to pass the TCK. In the LRA hangout a week last Monday I proposed something like:

```java
public interface ManagementSPI {
    LRAInfo getStatus(LRA lra); // get status of one LRA
    Collection<LRAInfo> getLRAs(CompensatorStatus status); // list LRAs in the given state
    // anything else
}
```

So to get recovering LRAs the status would be set to Completing or Compensating.

This would mean we would not need to configure the endpoint in the TCK.
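A minimal, self-contained sketch of how a TCK recovery check could use such an SPI. Everything here is illustrative: the `LRA`, `LRAInfo`, and `CompensatorStatus` shapes stand in for the real spec types, and the `recovered` helper and in-memory store are hypothetical.

```java
import java.util.Collection;
import java.util.List;
import java.util.stream.Collectors;

// Stand-ins for the spec types referenced in the proposal above.
enum CompensatorStatus { Completing, Completed, Compensating, Compensated }
record LRA(String id) {}
record LRAInfo(LRA lra, CompensatorStatus status) {}

// The SPI shape from the proposal: implementations expose LRA state so the
// TCK never needs to know a coordinator endpoint.
interface ManagementSPI {
    LRAInfo getStatus(LRA lra);                            // status of one LRA
    Collection<LRAInfo> getLRAs(CompensatorStatus status); // LRAs in a given state
}

public class RecoveryCheck {
    // Recovery is finished when no LRA is still Completing or Compensating.
    static boolean recovered(ManagementSPI spi) {
        return spi.getLRAs(CompensatorStatus.Completing).isEmpty()
            && spi.getLRAs(CompensatorStatus.Compensating).isEmpty();
    }

    public static void main(String[] args) {
        // Toy in-memory implementation: one finished LRA, one still recovering.
        List<LRAInfo> store = List.of(
            new LRAInfo(new LRA("lra-1"), CompensatorStatus.Completed),
            new LRAInfo(new LRA("lra-2"), CompensatorStatus.Compensating));
        ManagementSPI spi = new ManagementSPI() {
            public LRAInfo getStatus(LRA lra) {
                return store.stream()
                        .filter(i -> i.lra().equals(lra))
                        .findFirst().orElseThrow();
            }
            public Collection<LRAInfo> getLRAs(CompensatorStatus status) {
                return store.stream()
                        .filter(i -> i.status() == status)
                        .collect(Collectors.toList());
            }
        };
        System.out.println(recovered(spi)); // lra-2 is still Compensating
    }
}
```

The TCK would poll `recovered` until it returns true, replacing the current endpoint-based recovery check.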

@mmusgrov mmusgrov added the Advertise label (PR needs to be advertised to the community on Wed to give everyone an opportunity to provide input) Dec 13, 2018
@xstefank
Member Author

I've decided to close this PR in favor of the transition to MP config property configuration handling (#62)

@xstefank xstefank closed this Dec 13, 2018
@xstefank
Member Author

After a deeper investigation of the issue #62 MP config inclusion part, it seems that the fix for #62 will affect only the spec text and the TCK, not the spec code. For this reason, I think this PR still has value in merging, as it will decrease the number of properties that need to be configured in the TCK and described in the spec text.

@xstefank xstefank reopened this Dec 13, 2018
@mmusgrov mmusgrov added the stalled label (The PR is blocked by some condition) and removed the Advertise label (PR needs to be advertised to the community on Wed to give everyone an opportunity to provide input) Jan 7, 2019
@xstefank xstefank changed the title issue42 Remove references to coordinator endpoints from the LRAClent API issue42 Remove references to coordinator endpoints from the LRAClient API Mar 11, 2019
@mmusgrov
Contributor

mmusgrov commented Apr 24, 2019

The java API component of the PR is not part of 1.0.
The other references to coordinators are discussed in issue #139 (#141). If that PR is merged I think it is safe to close this PR (without a merge).

@xstefank
Member Author

@mmusgrov this has been stalled for far too long. If your new PR also covers the items in issue #42, go ahead and close this one.

@mmusgrov
Contributor

Closing since issue #139 is now merged

@mmusgrov mmusgrov closed this Apr 26, 2019