Planned
Reorganizing AI Models in 'All' view
Description: Currently, the arrangement of AI models in the All models menu is scattered; models from the same provider are not grouped together. For example, Meta has models at the top of the list and also at the bottom. This setup can lead to confusion and inefficiency when users are trying to locate specific models. Proposed Change: Reorganize the AI models by sequencing them according to their respective providers, placing each provider's models in a contiguous block. This will improve navigational efficiency and user experience by making it easier to find and compare models from the same provider.
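A minimal sketch of the grouping, assuming each model record carries a provider field (the Model shape below is hypothetical):

```ts
// Minimal sketch: keep each provider's models in one contiguous block.
// The Model shape and the "provider" field are assumptions for illustration.
interface Model {
  id: string;
  name: string;
  provider: string; // e.g. "Meta", "OpenAI", "Mistral"
}

function groupByProvider(models: Model[]): Model[] {
  // A stable sort by provider groups models per provider while preserving
  // the existing order within each provider.
  return [...models].sort((a, b) => a.provider.localeCompare(b.provider));
}
```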
Johnson Lai 8 months ago
Suggest to add an Update Password UI for users who sign up with Google
For users who sign up with Google, there is no way to update their password except by going through the forgot-password flow. It would be good to add a UI for users to update their password.
Engineers 8 months ago
Suggest to add a loading state between the Run and Stop button
I'm testing long prompts of around 3.2k tokens using GPT-4 and have wasted quite a few credits. Sometimes the Run button is not very responsive (especially when the prompt is saving or the token count is a bit large); there may be a half-second to one-second delay before it actually starts running. I accidentally double-click because I think I didn't press the Run button the first time, and as a result the second click actually hits the Stop button. There needs to be a longer delay before the Run button changes into the Stop button, or maybe a loading state.
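A minimal sketch of the guard this implies, in plain client-side code (the handler names are hypothetical):

```ts
// Sketch: add a "starting" state between Run and Stop so a quick second click
// is ignored instead of hitting Stop. Names here are illustrative only.
type RunState = "idle" | "starting" | "running";
let state: RunState = "idle";

async function onRunClick(startRun: () => Promise<void>, stopRun: () => void) {
  if (state === "starting") return;      // swallow double-clicks while the run is starting
  if (state === "running") {             // only a deliberate click on "Stop" stops the run
    stopRun();
    state = "idle";
    return;
  }
  state = "starting";                    // UI shows a spinner / disabled button here
  await startRun();                      // e.g. wait for the prompt save and request to begin
  state = "running";                     // only now does the button become "Stop"
}
```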
Engineers 8 months ago
To create a text-based dataset directly without uploading
I'm trying to extract some system prompts and turn them into a dataset. I wish we had this feature so I don't need to create a text file on my device and go through the upload process.
Yongsheng Lui 9 months ago
Add prompt injection prevention mechanism
We should add a prompt injection prevention mechanism:
• OpenAI moderation (free): https://platform.openai.com/docs/guides/moderation/overview
• Prompt injection prevention from Bittensor SN14: https://docs.synapsec.ai
Johnson got the API key from http://synapsec.ai as well.
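For the OpenAI moderation option, a minimal sketch of the check (where it sits in our execution pipeline is left open):

```ts
// Sketch: screen user input with OpenAI's moderation endpoint before it
// reaches the prompt. Wiring into our execution pipeline is not shown.
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function isFlagged(userInput: string): Promise<boolean> {
  const res = await client.moderations.create({ input: userInput });
  return res.results[0].flagged; // true if any moderation category is triggered
}
```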
Johnson Lai 9 months ago
Making Weave API OpenAI compatible
With the Weave API being OpenAI compatible, users can switch over by changing just two lines, so we can quickly onboard users who already use OpenAI:

```js
import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "https://pms.chasm.net/api/prompts/execute/44",
  apiKey: OPENAI_API_KEY,
});
```

With this feature, we can also integrate with Langchain easily; users can change the model through the API. Slack discussion: https://chasm-talk.slack.com/archives/C05PHU191GX/p1715774841176749
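For example, an existing OpenAI-based integration could then run largely unchanged against the Weave endpoint. A rough sketch, assuming we mirror the chat completions surface (the env var and model name are illustrative):

```ts
import OpenAI from "openai";

// The same client code a user already has for OpenAI; only baseURL/apiKey change.
const openai = new OpenAI({
  baseURL: "https://pms.chasm.net/api/prompts/execute/44",
  apiKey: process.env.WEAVE_API_KEY, // hypothetical env var name, for illustration
});

async function demo() {
  // Assumes the Weave endpoint mirrors OpenAI's chat completions surface.
  const completion = await openai.chat.completions.create({
    model: "gpt-4o", // illustrative; actual model routing would be up to Weave
    messages: [{ role: "user", content: "Hello from an OpenAI-style client" }],
  });
  console.log(completion.choices[0].message.content);
}

demo().catch(console.error);
```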
Johnson Lai 9 months ago
[feature] Conditional logic, if/else and for loops
https://chasm.featurebase.app/p/conditional-or-logic-block-to-enable-workflow-brances
^ In regards to the post above, we have revisited the need for built-in logic blocks on the workflow page. The proposed design is either assembly-style JUMP-IF/LAND logic or an encompassing loop. Some samples of how they might look. The If/Else can simply be a dual-branch design; there is no need for a Case-style switch design for now. A design decision we must make is choosing between another prompt block with execution-pathway selection and a simple boolean/operator mini-block. I'm leaning towards the former, as it will be more flexible and in line with the LLM industry.
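A very rough sketch of what a dual-branch block's data could look like (hypothetical field names, not a committed schema):

```ts
// Hypothetical shape for a dual-branch If/Else block in the workflow graph.
// Field names are illustrative only, not a committed schema.
interface IfElseBlock {
  id: string;
  type: "if_else";
  // Either a simple boolean/operator condition over block outputs...
  condition?: { left: string; operator: "==" | "!=" | ">" | "<" | "contains"; right: string };
  // ...or a prompt block whose output selects the execution pathway.
  selectorPromptId?: string;
  trueBranch: string;  // id of the next block when the condition holds
  falseBranch: string; // id of the next block otherwise
}

// A loop could be modeled similarly: a block that jumps back to an earlier
// block id until its condition fails (assembly-style JUMP-IF/LAND).
```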
John Koh 10 months ago
Dev In Progress
Filter function for core modules
As a user, I want to be able to filter prompts/workflows by status. As a user, I want to be able to filter datasets by file type.
Mei Wei Lim 11 months ago
Dev In Progress
Sorting function for core modules
As a user, I want to be able to sort my prompts, workflows, and datasets by: name (alphabetical order, or reverse), created date (earliest to latest, or reverse), and updated date (latest to earliest, or reverse).
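A small sketch of the comparators this implies (the item shape is an assumption):

```ts
// Sketch of the three sort keys; the item shape is an assumption.
interface Item {
  name: string;
  createdAt: Date;
  updatedAt: Date;
}

const byName    = (a: Item, b: Item) => a.name.localeCompare(b.name);                  // alphabetical
const byCreated = (a: Item, b: Item) => a.createdAt.getTime() - b.createdAt.getTime(); // earliest first
const byUpdated = (a: Item, b: Item) => b.updatedAt.getTime() - a.updatedAt.getTime(); // latest first

// Reverse order is just the negated comparator, e.g. items.sort((a, b) => -byName(a, b)).
```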
Mei Wei Lim 11 months ago
PRD in Progress
Develop Weave's Platform Tokenizers
Different LLMs use different tokenizers. To streamline the token calculation method on our platform, it's best to develop our own set of tokenizers. Reference: https://github.com/huggingface/tokenizers
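A sketch of how a unified tokenizer interface might look on our side (naming is hypothetical; real implementations would wrap the Hugging Face tokenizers referenced above):

```ts
// Hypothetical unified tokenizer interface so token counting is consistent
// across models; each implementation would wrap that model's real tokenizer.
interface WeaveTokenizer {
  model: string;
  countTokens(text: string): number;
}

// Placeholder implementation for illustration only; a real one would load the
// model's vocabulary/merges (e.g. via the huggingface/tokenizers bindings).
const naiveWhitespaceTokenizer: WeaveTokenizer = {
  model: "example-model",
  countTokens: (text) => text.trim().split(/\s+/).filter(Boolean).length,
};
```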
Shuwei Li 11 months ago
Lock Node to Prevent Dragging
Because node positions are part of the experience, users sometimes mis-click a node and drag it elsewhere, and it is quite frustrating to have to drag it back, especially when all the nodes were laid out nicely in different regions. A lock feature would therefore be helpful for users who like to keep their layout tidy.
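If the workflow canvas is built on something like React Flow (an assumption on my part), locking could be as simple as toggling each node's draggable flag:

```ts
// Sketch assuming a React Flow style node model, where each node accepts an
// optional `draggable` flag; a "lock" toggle flips that flag for all nodes.
import type { Node } from "reactflow";

function setLocked(nodes: Node[], locked: boolean): Node[] {
  return nodes.map((node) => ({ ...node, draggable: !locked }));
}
```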
Patrick Lee 11 months ago
Select Multiple Nodes
It would be better if we had a drag box that selects multiple nodes and allows the user to group them and drag them together. Benchmark: Figma.
Patrick Lee 11 months ago
Dev In Progress
Option for users to view core modules in grid or list view
By default, users view the core modules in grid view. Users should have the option to toggle between grid view and list view. Core modules include: Prompt, Workflow, and Dataset.
Mei Wei Lim 11 months ago
Completed
Model Catalog
Before a user selects a model to run their prompt, the model catalog can help users by:
• providing model descriptions, so they can better understand the available models
• shortlisting the LLM models according to their use case and preference
• offering search, for ease of finding their preferred model
This model catalog is also considered part of the expansion plan where Weave will be the aggregator of dAI models.
product@chasm.net 11 months ago
Add a blog section to the Weave Landing Page
Suggest incorporating a blog section into the Weave Landing Page by utilizing a third-party tool to embed Medium articles into our own blog post section on the Weave website. Alternatively, we could consider developing our own blog section. The primary motivation behind this suggestion is to enhance Weave's SEO performance. Currently, when users search for articles on our Medium platform, it does not effectively drive traffic to our main website. For instance, a search for "Chain of Thoughts" yields results that prominently feature our competitor Vellum's website, effectively diverting user traffic to Vellum's platform. By adding a blog section to the Weave Landing Page, we can improve our visibility in search results. Third-party embedding tools: https://dropinblog.com/
Shuwei Li 12 months ago
Multiple credit purchases per transaction/payment
Allow users to purchase a credit package n times per payment or transaction. For example, getting $120 worth of credits by purchasing the $30 package in 4x quantity.
Yongsheng Lui 12 months ago
Domain name & IP whitelist
We need this whitelist feature for each endpoint due to security concerns. Suggest having a global setting that applies to all endpoints while also allowing an override for each individual prompt/workflow.
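A sketch of the settings shape this implies (field names are hypothetical):

```ts
// Hypothetical shape for whitelist settings: a global default plus optional
// per-endpoint (prompt/workflow) overrides. Names are illustrative only.
interface WhitelistSettings {
  allowedDomains: string[]; // e.g. ["app.example.com"]
  allowedIPs: string[];     // e.g. ["203.0.113.7"]
}

interface WorkspaceSettings {
  global: WhitelistSettings;
  perEndpoint: Record<string, Partial<WhitelistSettings>>; // keyed by prompt/workflow id
}

function effectiveWhitelist(settings: WorkspaceSettings, endpointId: string): WhitelistSettings {
  // Per-endpoint values override the global defaults field by field.
  const override = settings.perEndpoint[endpointId] ?? {};
  return { ...settings.global, ...override };
}
```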
Yongsheng Lui 12 months ago