
Conversation

@Jerry-Kon
Contributor

Add a feature for a local ChatGLM API
Add the relevant test program

@@ -0,0 +1,13 @@
import sys
sys.path.append('/home/jerry_kon/my_chatglm/ChatLLM')
Member

Remove this.
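For context, the hard-coded absolute `sys.path` entry only works on one machine. One common alternative (a sketch only, assuming the test file sits one directory below the repository root; adjust the number of `.parent` hops as needed) is to resolve the path relative to the test file itself:

```python
# Sketch: derive the repo root from the test file's own location instead of
# hard-coding an absolute path like /home/jerry_kon/my_chatglm/ChatLLM.
import sys
from pathlib import Path

# Assumes this file lives one level below the repository root.
REPO_ROOT = Path(__file__).resolve().parent.parent
sys.path.insert(0, str(REPO_ROOT))
```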

@@ -0,0 +1,13 @@
import sys
sys.path.append('/home/jerry_kon/my_chatglm/ChatLLM')
from apis.api import ChatLLM
Member

Format this with Python lint tools.

import sys
sys.path.append('/home/jerry_kon/my_chatglm/ChatLLM')
from apis.api import ChatLLM
chat = ChatLLM(
Member

For unit tests, this won't work because we have no model in this project. I'm still figuring out the proper approach to solve this, so for now, just remove this test.
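One possible way around the missing model in unit tests (a sketch only, not the project's actual approach: `FakeChatModel` and its `chat()` signature are hypothetical stand-ins mimicking a ChatGLM-style interface) is to stub the model object so no weights are needed:

```python
# Hypothetical stub mimicking the (response, history) pair that a
# ChatGLM-style model.chat() call returns, so the test needs no weights.
class FakeChatModel:
    def chat(self, tokenizer, prompt, history=None):
        history = list(history or [])
        response = f"stub reply to: {prompt}"
        history.append((prompt, response))
        return response, history


def test_completion_without_real_model():
    model = FakeChatModel()
    response, history = model.chat(None, "hello")
    assert response == "stub reply to: hello"
    assert history == [("hello", "stub reply to: hello")]


test_completion_without_real_model()
```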

@@ -0,0 +1,13 @@
import sys
sys.path.append('/home/jerry_kon/my_chatglm/ChatLLM')
Member

Remove this.

llms/chatglm.py Outdated
@@ -0,0 +1,22 @@
from transformers import AutoTokenizer, AutoModel

#if not history:
Member

Remove this, it's ugly.

llms/chatglm.py Outdated
self.tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
self.model = AutoModel.from_pretrained(model_name_or_path, trust_remote_code=True).half().cuda()
self.model = self.model.eval()
def completion(self,system_prompt, user_prompt):
Member

Please format this code.

llms/chatglm.py Outdated
@@ -0,0 +1,20 @@
from transformers import AutoTokenizer, AutoModel

class ChatglmChat:
Member

This should inherit from APIChat
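For reference, a minimal sketch of what that inheritance could look like. The real `APIChat` lives in `apis.api` and its interface is not shown in this diff, so the abstract base below is an assumption, and model loading is omitted to keep the sketch self-contained:

```python
from abc import ABC, abstractmethod


class APIChat(ABC):  # stand-in for the real base class in apis.api
    @abstractmethod
    def completion(self, system_prompt, user_prompt):
        raise NotImplementedError


class ChatglmChat(APIChat):
    """Subclass sketch; the real class would load the ChatGLM model."""

    def completion(self, system_prompt, user_prompt):
        # Placeholder behavior instead of a real model.chat(...) call.
        return f"echo: {user_prompt}"
```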

llms/chatglm.py Outdated
Comment on lines 8 to 12
self.model = (
AutoModel.from_pretrained(model_name_or_path, trust_remote_code=True)
.half()
.cuda()
)
Member

Suggested change
        self.model = (
            AutoModel.from_pretrained(model_name_or_path, trust_remote_code=True)
            .half()
            .cuda()
        )
        self.model = AutoModel.from_pretrained(model_name_or_path, trust_remote_code=True).half().cuda()

@@ -0,0 +1,11 @@
from apis.api import ChatLLM
Member

Format this.

@@ -0,0 +1,11 @@
from apis.api import ChatLLM
chat = ChatLLM(
Member

Please refer to the existing Python tests for how to write a test. Basically, it should be

class TestXXX:
    def test_xxx(self):
        pass

Member

Please move this to the top-level ChatLLM/integration_tests/, which means you should create a new folder.

Also add a test for ChatGLM.

)
self.model = self.model.eval()

def completion(self, system_prompt, user_prompt):
Member

Add a warning here if a system_prompt is provided.
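A minimal sketch of that guard (the helper name and the exact message are illustrative, and it assumes this backend simply drops the system prompt rather than forwarding it to the model):

```python
import warnings


def warn_on_system_prompt(system_prompt):
    """Illustrative helper: emit a UserWarning when a system_prompt is
    passed, since this ChatGLM wrapper does not forward it to the model."""
    if system_prompt:
        warnings.warn(
            "system_prompt is not supported by this backend and will be ignored.",
            UserWarning,
        )
```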

@kerthcet kerthcet merged commit 3d4f31b into InftyAI:main Sep 8, 2023