Jailbreaking in Large Language Models (LLMs) is a major security concern, as it can deceive LLMs into generating harmful text. Yet, there is still insufficient understanding of how …