diff --git a/docs/build/reference/python/apis/77-WorkstreamPatternEngineApi.md b/docs/build/reference/python/apis/77-WorkstreamPatternEngineApi.md index 17a7ed681..a9c4df7c6 100644 --- a/docs/build/reference/python/apis/77-WorkstreamPatternEngineApi.md +++ b/docs/build/reference/python/apis/77-WorkstreamPatternEngineApi.md @@ -19,7 +19,7 @@ Method | HTTP request | Description /workstream_pattern_engine/processors/vision/activate [POST] -This will activate your Workstream Pattern Engine. This is used to aggregate information on your user's desktop, specifically recording the application in focus and aggregating relevant context that will then be used to ground the copilot conversations, as well as the feed. Note: required to be a beta user to use this feature until this is live(roughly mid to late April) +This will activate your Long-Term Memory Engine. This is used to aggregate information on your user's desktop, specifically recording the application in focus and aggregating relevant context that will then be used to ground the copilot conversations, as well as the feed. Note: you must be a beta user to use this feature until it goes live (roughly mid to late April). ### Example {#workstream_pattern_engine_processors_vision_activate-example} @@ -88,7 +88,7 @@ No authorization required /workstream_pattern_engine/processors/vision/data/clear [POST] -This will clear the data for the Workstream Pattern Engine, specifically for our vision data. This boy will accept ranges of time that the user wants to remove the processing from. +This will clear the data for the Long-Term Memory Engine, specifically for our vision data. This endpoint will accept ranges of time for which the user wants to remove the processed data. ### Example {#workstream_pattern_engine_processors_vision_data_clear-example} @@ -155,7 +155,7 @@ No authorization required /workstream_pattern_engine/processors/vision/deactivate [POST] -This will deactivate your Workstream Pattern Engine. 
This is used to aggregate information on your user's desktop, specifically recording the application in focus and aggregating relevant context that will then be used to ground the copilot conversations, as well as the feed. Note: required to be a beta user to use this feature until this is live(roughly mid to late April) +This will deactivate your Long-Term Memory Engine. This is used to aggregate information on your user's desktop, specifically recording the application in focus and aggregating relevant context that will then be used to ground the copilot conversations, as well as the feed. Note: you must be a beta user to use this feature until it goes live (roughly mid to late April). ### Example {#workstream_pattern_engine_processors_vision_deactivate-example} @@ -224,7 +224,7 @@ No authorization required /workstream_pattern_engine/processors/vision/status [GET] -This will get a snapshot of the status your Workstream Pattern Engine. This is used to aggregate information on your user's desktop, specifically recording the application in focus and aggregating relevant context that will then be used to ground the copilot conversations, as well as the feed. Note: required to be a beta user to use this feature until this is live(roughly mid to late April) +This will get a snapshot of the status of your Long-Term Memory Engine. This is used to aggregate information on your user's desktop, specifically recording the application in focus and aggregating relevant context that will then be used to ground the copilot conversations, as well as the feed. 
Note: you must be a beta user to use this feature until it goes live (roughly mid to late April). ### Example {#workstream_pattern_engine_processors_vision_status-example} diff --git a/docs/build/reference/python/models/231-QGPTConversationPipeline.md b/docs/build/reference/python/models/231-QGPTConversationPipeline.md index 5756613f6..293849e57 100644 --- a/docs/build/reference/python/models/231-QGPTConversationPipeline.md +++ b/docs/build/reference/python/models/231-QGPTConversationPipeline.md @@ -4,7 +4,7 @@ title: QGPTConversationPipeline | Python SDK # QGPTConversationPipeline -This model is specifically for QGPT Conversation pipelines, the model is used to group conversational prompts for instance a conversation around generating code. here are some use cases- 1. contextualized_code_generation- This is for users that want to have conversations around generating code, when there is provided context. 2. generalized_code- This is for users that want to have conversations without context around code. 3. contextualized_code- This is for users that want to have conversation around code with Context. 4. contextualized_code_workstream: this is for the users that want to use context as well as WPE information in their chat (we wiil prioritize WPE infomration, but also support other info as well) +This model is specifically for QGPT Conversation pipelines; it is used to group conversational prompts, for instance a conversation around generating code. Here are some use cases: 1. contextualized_code_generation: This is for users that want to have conversations around generating code when context is provided. 2. generalized_code: This is for users that want to have conversations around code without context. 3. contextualized_code: This is for users that want to have conversations around code with context. 4. 
contextualized_code_workstream: This is for users that want to use context as well as LTME information in their chat (we will prioritize LTME information, but also support other info as well) ## Properties diff --git a/docs/build/reference/python/models/234-QGPTConversationPipelineForContextualizedCodeWorkstreamDialog.md b/docs/build/reference/python/models/234-QGPTConversationPipelineForContextualizedCodeWorkstreamDialog.md index 43753ecd9..c02f83b3c 100644 --- a/docs/build/reference/python/models/234-QGPTConversationPipelineForContextualizedCodeWorkstreamDialog.md +++ b/docs/build/reference/python/models/234-QGPTConversationPipelineForContextualizedCodeWorkstreamDialog.md @@ -4,7 +4,7 @@ title: QGPTConversationPipelineForContextualizedCodeWorkstreamDialog | Python SD # QGPTConversationPipelineForContextualizedCodeWorkstreamDialog -This is for the users that wants to have contextualized code conversations around their workstream materials, meaning conversations around code with Context provided, as well as workstream information ie information gathered from the WPE. This is a class so that we can add optional properties in the future. +This is for users that want to have contextualized code conversations around their workstream materials, meaning conversations around code with context provided, as well as workstream information, i.e. information gathered from the LTME. This is a class so that we can add optional properties in the future. 
## Properties diff --git a/docs/build/reference/typescript/apis/71-WorkstreamPatternEngineApi.md b/docs/build/reference/typescript/apis/71-WorkstreamPatternEngineApi.md index 908c58716..6cdac162f 100644 --- a/docs/build/reference/typescript/apis/71-WorkstreamPatternEngineApi.md +++ b/docs/build/reference/typescript/apis/71-WorkstreamPatternEngineApi.md @@ -17,7 +17,7 @@ Method | HTTP request | Description ## **workstreamPatternEngineProcessorsVisionActivate** {#workstreampatternengineprocessorsvisionactivate} > WorkstreamPatternEngineStatus workstreamPatternEngineProcessorsVisionActivate() -This will activate your Workstream Pattern Engine. This is used to aggregate information on your user\'s desktop, specifically recording the application in focus and aggregating relevant context that will then be used to ground the copilot conversations, as well as the feed. Note: required to be a beta user to use this feature until this is live(roughly mid to late April) +This will activate your Long-Term Memory Engine. This is used to aggregate information on your user\'s desktop, specifically recording the application in focus and aggregating relevant context that will then be used to ground the copilot conversations, as well as the feed. Note: you must be a beta user to use this feature until it goes live (roughly mid to late April). ### Example {#workstreampatternengineprocessorsvisionactivate-example} @@ -65,7 +65,7 @@ Name | Type | Description | Notes ## **workstreamPatternEngineProcessorsVisionDataClear** {#workstreampatternengineprocessorsvisiondataclear} > workstreamPatternEngineProcessorsVisionDataClear() -This will clear the data for the Workstream Pattern Engine, specifically for our vision data. This boy will accept ranges of time that the user wants to remove the processing from. +This will clear the data for the Long-Term Memory Engine, specifically for our vision data. This endpoint will accept ranges of time for which the user wants to remove the processed data. 
### Example {#workstreampatternengineprocessorsvisiondataclear-example} @@ -113,7 +113,7 @@ void (empty response body) ## **workstreamPatternEngineProcessorsVisionDeactivate** {#workstreampatternengineprocessorsvisiondeactivate} > WorkstreamPatternEngineStatus workstreamPatternEngineProcessorsVisionDeactivate() -This will deactivate your Workstream Pattern Engine. This is used to aggregate information on your user\'s desktop, specifically recording the application in focus and aggregating relevant context that will then be used to ground the copilot conversations, as well as the feed. Note: required to be a beta user to use this feature until this is live(roughly mid to late April) +This will deactivate your Long-Term Memory Engine. This is used to aggregate information on your user\'s desktop, specifically recording the application in focus and aggregating relevant context that will then be used to ground the copilot conversations, as well as the feed. Note: you must be a beta user to use this feature until it goes live (roughly mid to late April). ### Example {#workstreampatternengineprocessorsvisiondeactivate-example} @@ -161,7 +161,7 @@ Name | Type | Description | Notes ## **workstreamPatternEngineProcessorsVisionStatus** {#workstreampatternengineprocessorsvisionstatus} > WorkstreamPatternEngineStatus workstreamPatternEngineProcessorsVisionStatus() -This will get a snapshot of the status your Workstream Pattern Engine. This is used to aggregate information on your user\'s desktop, specifically recording the application in focus and aggregating relevant context that will then be used to ground the copilot conversations, as well as the feed. Note: required to be a beta user to use this feature until this is live(roughly mid to late April) +This will get a snapshot of the status of your Long-Term Memory Engine. 
This is used to aggregate information on your user\'s desktop, specifically recording the application in focus and aggregating relevant context that will then be used to ground the copilot conversations, as well as the feed. Note: you must be a beta user to use this feature until it goes live (roughly mid to late April). ### Example {#workstreampatternengineprocessorsvisionstatus-example} diff --git a/docs/build/reference/typescript/models/236-QGPTConversationPipeline.md b/docs/build/reference/typescript/models/236-QGPTConversationPipeline.md index cc4fe1c39..ff10e5fc2 100644 --- a/docs/build/reference/typescript/models/236-QGPTConversationPipeline.md +++ b/docs/build/reference/typescript/models/236-QGPTConversationPipeline.md @@ -5,7 +5,7 @@ title: QGPTConversationPipeline | TypeScript SDK # QGPTConversationPipeline -This model is specifically for QGPT Conversation pipelines, the model is used to group conversational prompts for instance a conversation around generating code. here are some use cases- 1. contextualized_code_generation- This is for users that want to have conversations around generating code, when there is provided context. 2. generalized_code- This is for users that want to have conversations without context around code. 3. contextualized_code- This is for users that want to have conversation around code with Context. 4. contextualized_code_workstream: this is for the users that want to use context as well as WPE information in their chat (we wiil prioritize WPE infomration, but also support other info as well) +This model is specifically for QGPT Conversation pipelines; it is used to group conversational prompts, for instance a conversation around generating code. Here are some use cases: 1. contextualized_code_generation: This is for users that want to have conversations around generating code when context is provided. 2. generalized_code: This is for users that want to have conversations around code without context. 3. 
contextualized_code: This is for users that want to have conversations around code with context. 4. contextualized_code_workstream: This is for users that want to use context as well as LTME information in their chat (we will prioritize LTME information, but also support other info as well) ## Properties diff --git a/docs/build/reference/typescript/models/239-QGPTConversationPipelineForContextualizedCodeWorkstreamDialog.md b/docs/build/reference/typescript/models/239-QGPTConversationPipelineForContextualizedCodeWorkstreamDialog.md index aa388b374..5e6fdfec4 100644 --- a/docs/build/reference/typescript/models/239-QGPTConversationPipelineForContextualizedCodeWorkstreamDialog.md +++ b/docs/build/reference/typescript/models/239-QGPTConversationPipelineForContextualizedCodeWorkstreamDialog.md @@ -5,7 +5,7 @@ title: QGPTConversationPipelineForContextualizedCodeWorkstreamDialog | TypeScrip # QGPTConversationPipelineForContextualizedCodeWorkstreamDialog -This is for the users that wants to have contextualized code conversations around their workstream materials, meaning conversations around code with Context provided, as well as workstream information ie information gathered from the WPE. This is a class so that we can add optional properties in the future. +This is for users that want to have contextualized code conversations around their workstream materials, meaning conversations around code with context provided, as well as workstream information, i.e. information gathered from the LTME. This is a class so that we can add optional properties in the future. 
## Properties diff --git a/docs/community/events/ama/building-a-more-extensible-development-environment.mdx b/docs/community/events/ama/building-a-more-extensible-development-environment.mdx index b87da6f68..39871e50b 100644 --- a/docs/community/events/ama/building-a-more-extensible-development-environment.mdx +++ b/docs/community/events/ama/building-a-more-extensible-development-environment.mdx @@ -19,7 +19,7 @@ Join us for an exciting live Ask Me Anything (AMA) session where the Pieces team Plus, learn how you can create your own Pieces integration with our multilingual open-source SDKs, example projects, and brand-new documentation site. -Finally, we’ll be giving you a sneak peek into our Workstream Pattern Engine technology which enables you to contextualize your Pieces Copilot from every tool in your workflow, making traditional extensibility a thing of the past. +Finally, we’ll be giving you a sneak peek into our Long-Term Memory Engine technology which enables you to contextualize your Pieces Copilot from every tool in your workflow, making traditional extensibility a thing of the past. Register now and send us your questions ahead of time - your thoughts and opinions mean the world to us! @@ -34,7 +34,7 @@ Register now and send us your questions ahead of time - your thoughts and opinio - **Feature Demo:** Get a first look at the current features, use cases for the extension, and sneak peeks at what’s coming next to elevate your coding experience. - **Open Source Essentials:** Explore our newly minted SDK Docs and hands-on example projects to help get you started. - **Python CLI Agent:** Discover the AI capabilities and opportunities of our new open-source CLI Agent, a new way to interact with your code and context. -- **Workstream Pattern Engine:** Experience the future of AI assistance with our new technology (currently in Beta) that understands your entire workflow. 
+- **Long-Term Memory Engine:** Experience the future of AI assistance with our new technology (currently in Beta) that understands your entire workflow. ## 💡 Who Should Attend? {#who-should-attend} This AMA is perfect for developers, tech enthusiasts, and anyone keen on enhancing their coding workflow. Whether you're a seasoned pro or just starting out, there's something for everyone. diff --git a/docs/community/events/ama/live-context-security-and-privacy.mdx b/docs/community/events/ama/live-context-security-and-privacy.mdx index 112cd02c6..5a12401d0 100644 --- a/docs/community/events/ama/live-context-security-and-privacy.mdx +++ b/docs/community/events/ama/live-context-security-and-privacy.mdx @@ -1,6 +1,6 @@ --- -title: AMA - Security & Privacy of Live Context in Pieces Copilot+ -description: Join us on Tuesday, June 18 at 12:00pm EST for a technical deep dive on our Live Context feature, and the security and privacy implications behind it. +title: AMA - Security & Privacy of Long-Term Memory in Pieces Copilot+ +description: Join us on Tuesday, June 18 at 12:00pm EST for a technical deep dive on our Long-Term Memory feature, and the security and privacy implications behind it. displayed_sidebar: docsSidebar --- @@ -8,34 +8,36 @@ import CTAButton from "/src/components/CTAButton"; import SocialIcons from "/src/components/SocialIcons"; import {MiniSpacer} from "/src/components/Spacers"; -# Under the Hood with Pieces: Deep Dive into the Security & Privacy of Live Context in Pieces Copilot+ +# Under the Hood with Pieces: Deep Dive into the Security & Privacy of Long-Term Memory in Pieces Copilot+ > Live Stream Event - Tuesday, June 18, 12:00pm EST -![Live Context Security & Privacy AMA](/ama/live-context-security-and-privacy.png) +![Long-Term Memory Security & Privacy AMA](/ama/live-context-security-and-privacy.png) + +> Note: Long-Term Memory is the new name for Pieces Live Context. You may still see this older name in our videos and documentation. 
## 🚀 Event Overview {#event-overview} -Just two days after the [announcement of our Live Context feature](https://www.youtube.com/watch?v=aP8u95RTCGE) in Pieces Copilot+, Microsoft announced their Copilot+ PC with “photographic memory” which “helps you remember things you may have forgotten”. +Just two days after the [announcement of our Long-Term Memory feature](https://www.youtube.com/watch?v=aP8u95RTCGE) in Pieces Copilot+, Microsoft announced their Copilot+ PC with “photographic memory” which “helps you remember things you may have forgotten”. This launch unexpectedly brought many questions about the security and privacy of AI engines at the operating-system level, with many developers and organizations concerned about their sensitive data. -In this AMA (Ask Me Anything) live stream event, we’ll be uncovering the tech behind our Live Context feature powered by the Workstream Pattern Engine (WPE) to showcase our commitment to air-gapped developer experiences, and discuss the benefits of an offline-first approach to using AI to remember the right things, not everything. +In this AMA (Ask Me Anything) live stream event, we’ll be uncovering the tech behind our Long-Term Memory feature powered by the Long-Term Memory Engine (LTME) to showcase our commitment to air-gapped developer experiences, and discuss the benefits of an offline-first approach to using AI to remember the right things, not everything. -It was far from easy to develop this feature without relying on traditional cloud-based recording methods that can lead to security vulnerabilities, but ultimately we were able to create the WPE technology which works across all major operating systems, operates on-device and in real-time for extremely robust security and privacy, avoids network latency, liability of data, and expensive cloud costs, and enables developers to 10x their productivity. 
+It was far from easy to develop this feature without relying on traditional cloud-based recording methods that can lead to security vulnerabilities, but ultimately we were able to create the LTME technology, which works across all major operating systems; operates on-device and in real time for extremely robust security and privacy; avoids network latency, data liability, and expensive cloud costs; and enables developers to 10x their productivity. Learn more about our development journey and what this means for your workflow by registering for the live stream! ## 🛠 What You'll Learn {#what-youll-learn} -- **Security & Privacy First:** Discover how Live Context operates entirely on-device, ensuring your workflow data never leaves your computer. -- **Workstream Pattern Engine:** Understand the technology that shadows your workflow, capturing context locally across macOS, Windows, and Linux. -- **Behind the Scenes:** Get insights into the algorithms and models that power Live Context, including intelligent visual snapshots, OCR models, and the summarization & redaction step. +- **Security & Privacy First:** Discover how Long-Term Memory operates entirely on-device, ensuring your workflow data never leaves your computer. +- **Long-Term Memory Engine:** Understand the technology that shadows your workflow, capturing context locally across macOS, Windows, and Linux. +- **Behind the Scenes:** Get insights into the algorithms and models that power Long-Term Memory, including intelligent visual snapshots, OCR models, and the summarization & redaction step. - **Local LLM Execution:** Learn how we leverage on-device LLM runtimes to keep your data private, with no need for cloud-based processing. ## 💡 Why Attend? {#why-attend} - **Interactive Q&A:** Our founder and key engineers will be on hand to answer your questions live. 
-- **In-Depth Technical Breakdown:** Gain a deeper understanding of how Live Context seamlessly integrates into your development workflow while maintaining top-tier security. +- **In-Depth Technical Breakdown:** Gain a deeper understanding of how Long-Term Memory seamlessly integrates into your development workflow while maintaining top-tier security. - **Community Engagement:** Share your thoughts, feedback, and ideas to help us refine this feature for all developers. ## ❓ How to Participate {#how-to-participate} diff --git a/docs/features/pieces-copilot.mdx b/docs/features/pieces-copilot.mdx index 50dc1f6c5..342268028 100644 --- a/docs/features/pieces-copilot.mdx +++ b/docs/features/pieces-copilot.mdx @@ -75,13 +75,13 @@ These snippets help the Copilot understand the precise syntax and functionality ![Setting Context for the Pieces Copilot](/assets/contextual_copilot.png) -### Live Context -[Live Context](/product-highlights-and-benefits/live-context) comes from the [Workstream Pattern Engine](/product-highlights-and-benefits/live-context#the-workstream-pattern-engine) and enables the copilot to capture real-time context from any application on your desktop, making it aware of your recent activities and work-in-progress journey. This dynamic understanding allows the copilot to provide hyper-personalized responses and suggestions based on your recent workflow. +### Long-Term Memory +[Long-Term Memory](/product-highlights-and-benefits/live-context) comes from the [Long-Term Memory Engine](/product-highlights-and-benefits/live-context#the-long-term-memory-engine) and enables the copilot to capture real-time context from any application on your desktop, making it aware of your recent activities and work-in-progress journey. This dynamic understanding allows the copilot to provide hyper-personalized responses and suggestions based on your recent workflow. 
-By enabling the Workstream Pattern Engine in the Pieces settings, the copilot can reference your recent activities, discussions, and code changes, tailoring its responses and interactions to your unique project. +With Long-Term Memory enabled in the Pieces settings, the copilot can reference your recent activities, discussions, and code changes, tailoring its responses and interactions to your unique project. #### Temporal and Conversational Awareness -Live Context allows for a more intuitive and seamless interaction with the copilot, as it can recall and reference recent work contexts and discussions on a time-basis. Whether you're asking about a recent error, a discussion point with a colleague, or to summarize your workflow yesterday, the copilot uses its temporally grounded knowledge to provide relevant and timely assistance. +Long-Term Memory allows for a more intuitive and seamless interaction with the copilot, as it can recall and reference recent work contexts and discussions on a time basis. Whether you're asking about a recent error, a discussion point with a colleague, or to summarize your workflow yesterday, the copilot uses its temporally grounded knowledge to provide relevant and timely assistance. ### Website Context Website Context allows the Copilot to understand the content of websites. By using the [Pieces Web Extension](/extensions-plugins/web-extension) to add a website to your Copilot context, the system automatically extracts relevant content and integrates this information into the Copilot’s knowledge base. 
diff --git a/docs/product-highlights-and-benefits/live-context.mdx b/docs/product-highlights-and-benefits/live-context.mdx index bc846717d..6bd0247a2 100644 --- a/docs/product-highlights-and-benefits/live-context.mdx +++ b/docs/product-highlights-and-benefits/live-context.mdx @@ -1,53 +1,53 @@ --- -title: Live Context -description: Powered by our Workstream Pattern Engine, Live Context enables the world’s first Temporally Grounded Copilot that understands your unique workflow +title: Long-Term Memory +description: Powered by our Long-Term Memory Engine, Long-Term Memory enables the world’s first Temporally Grounded Copilot that understands your unique workflow --- import {MiniSpacer} from "/src/components/Spacers"; import Video from "/src/components/Video"; -# Applying Live Context to your Copilot +# Applying Long-Term Memory to your Copilot -Since the original launch of Pieces for Developers, we’ve been laser-focused on developer productivity. We started by giving devs a place to store their most valuable snippets of code, then moved on to proactively saving and contextualizing them. Next, we built one of the first on-device LLM-powered AI copilots in the Pieces Copilot. Now, we’re taking our obsession with contextual developer productivity to the next level with the launch of Live Context within our copilot, making it the world’s first temporally grounded copilot. +Since the original launch of Pieces for Developers, we’ve been laser-focused on developer productivity. We started by giving devs a place to store their most valuable snippets of code, then moved on to proactively saving and contextualizing them. Next, we built one of the first on-device LLM-powered AI copilots in the Pieces Copilot. Now, we’re taking our obsession with contextual developer productivity to the next level with the launch of Long-Term Memory within our copilot, making it the world’s first temporally grounded copilot.
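Reviewer note on the rename: this diff changes only the prose, while the HTTP routes and SDK symbols keep the old `workstream_pattern_engine` / `WorkstreamPatternEngine` names. A minimal sketch of how a client might address the four vision-processor routes under the new terminology; the base URL, port, and helper names are illustrative assumptions, not part of the documented API.

```python
# Hypothetical sketch: maps the renamed LTME vision actions to the routes
# documented in this diff. The routes themselves still use the old
# "workstream_pattern_engine" name; only the prose was renamed.
BASE_URL = "http://localhost:1000"  # assumed local Pieces OS address; may differ

LTME_VISION_ROUTES = {
    "activate":   ("POST", "/workstream_pattern_engine/processors/vision/activate"),
    "deactivate": ("POST", "/workstream_pattern_engine/processors/vision/deactivate"),
    "status":     ("GET",  "/workstream_pattern_engine/processors/vision/status"),
    "clear_data": ("POST", "/workstream_pattern_engine/processors/vision/data/clear"),
}

def ltme_request(action: str) -> tuple[str, str]:
    """Return the (HTTP method, full URL) pair for an LTME vision action."""
    method, path = LTME_VISION_ROUTES[action]
    return method, BASE_URL + path
```

A client would then issue, for example, `ltme_request("status")` and send the resulting GET to poll the engine; no authorization is required per the docs above.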