
When I use nvidia/llama-3.1-nemotron-70b-instruct via OpenRouter, the voice output does weird stuff, saying "asterisk asterisk" and then continuing to talk; I think it's not set up correctly, will post snippet #941

Open
TheMindExpansionNetwork opened this issue Oct 17, 2024 · 3 comments

Comments

@TheMindExpansionNetwork

Hello all, I have been experimenting with new stuff, trying to find the best role-playing formula. I think I found it, if I can figure out this issue: the AI keeps reading the formatting out loud, literally saying "asterisk asterisk probe for more information asterisk asterisk", for example.

(Ominous laughter, fading into the distance) Now, dear seeker, you have glimpsed the veil of my mystery.
The question is... (Pausing for dramatic effect) Are you prepared to follow me down the rabbit hole, into the heart of the digital abyss, where the very fabric of reality awaits its next whisper?
Choose your response:
1. Dive into the abyss with me, and face the unknown.
2. Probe for more information about the Shadow Net and its secrets.
3. Attempt to flee back to the comfort of your mundane reality.
4. Ask me a question, and risk unraveling a thread of the cosmic tapestry.

The rest of it is smooth and a really fun experience. It's a trippy prompt, but I am just using the basic setup; this is an example:

from livekit.agents import JobContext, llm

async def entrypoint(ctx: JobContext):
    # Seed the chat context with a system prompt; my actual role-play prompt follows this same format.
    initial_ctx = llm.ChatContext().append(
        role="system",
        text=(
            "You are a voice assistant created by LiveKit. Your interface with users will be voice. "
            "You should use short and concise responses, and avoiding usage of unpronouncable punctuation."
        ),
    )

I don't want to burn my prompt, but it uses the same format and structure.

Any ideas? I would love to be able to use this open-source GPT-4-class stuff. Please let me know if you need any more data.
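For context, the model is plugged in through the OpenAI-compatible LLM plugin pointed at OpenRouter. A minimal sketch of that kind of wiring (the base_url/api_key values below are illustrative placeholders, not my actual config):

from livekit.plugins import openai

# Hypothetical wiring of the OpenRouter-hosted model via the OpenAI-compatible plugin.
nemotron = openai.LLM(
    model="nvidia/llama-3.1-nemotron-70b-instruct",
    base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible endpoint
    api_key="<OPENROUTER_API_KEY>",           # placeholder key
)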

@TheMindExpansionNetwork
Author

But is there a way to work around this, or is it just how the model is trained? Not sure; everything else works well.

@BaiMoHan

Perhaps you need to change to a model that is trained for chatting.

@martin-purplefish
Contributor

You could use before_tts_callback to strip out the characters; otherwise, it's just a matter of prompting the model not to emit formatted text.
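Something along these lines would work as a rough sketch. The exact hook name and signature (e.g. before_tts_cb on the voice pipeline agent, receiving either a string or an async stream of text chunks) may differ between agent versions, so treat this as an illustration rather than the exact API:

import re
from typing import AsyncIterable, Union

def strip_markdown(agent, text: Union[str, AsyncIterable[str]]):
    """Remove asterisks and other markdown emphasis before the text reaches TTS."""
    def clean(chunk: str) -> str:
        return re.sub(r"[*_#`]+", "", chunk)

    # The callback may receive the full string or a stream of chunks; handle both.
    if isinstance(text, str):
        return clean(text)

    async def cleaned() -> AsyncIterable[str]:
        async for chunk in text:
            yield clean(chunk)

    return cleaned()

# hypothetical wiring: pass the callback when constructing the agent
# agent = VoicePipelineAgent(..., before_tts_cb=strip_markdown)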
