Machine learning-guided protein engineering
P Kouba, P Kohout, F Haddadi, A Bushuiev… - ACS …, 2023 - ACS Publications
Recent progress in engineering highly promising biocatalysts has increasingly involved
machine learning methods. These methods leverage existing experimental and simulation …
14 examples of how LLMs can transform materials science and chemistry: a reflection on a large language model hackathon
KM Jablonka, Q Ai, A Al-Feghali, S Badhwar… - Digital discovery, 2023 - pubs.rsc.org
Large language models (LLMs) such as GPT-4 caught the interest of many scientists.
Recent studies suggested that these models could be useful in chemistry and materials …
Qwen technical report
Large language models (LLMs) have revolutionized the field of artificial intelligence,
enabling natural language processing tasks that were previously thought to be exclusive to …
NExT-GPT: Any-to-any multimodal LLM
While Multimodal Large Language Models (MM-LLMs) have recently made exciting strides,
they mostly fall prey to the limitation of only input-side multimodal understanding, without the …
Multimodal foundation models: From specialists to general-purpose assistants
Neural compression is the application of neural networks and other machine learning
methods to data compression. Recent advances in statistical machine learning have opened …
DoRA: Weight-decomposed low-rank adaptation
SY Liu, CY Wang, H Yin, P Molchanov… - … on Machine Learning, 2024 - openreview.net
Among the widely used parameter-efficient fine-tuning (PEFT) methods, LoRA and its
variants have gained considerable popularity because they avoid additional inference …
LLaMA-VID: An image is worth 2 tokens in large language models
In this work, we present a novel method to tackle the token generation challenge in Vision
Language Models (VLMs) for video and image understanding, called LLaMA-VID. Current …
On evaluating adversarial robustness of large vision-language models
Large vision-language models (VLMs) such as GPT-4 have achieved unprecedented
performance in response generation, especially with visual inputs, enabling more creative …
ShapeLLM: Universal 3D object understanding for embodied interaction
This paper presents ShapeLLM, the first 3D Multimodal Large Language Model (LLM)
designed for embodied interaction, exploring a universal 3D object understanding with 3D …
InstructEval: Towards holistic evaluation of instruction-tuned large language models
Instruction-tuned large language models have revolutionized natural language processing
and have shown great potential in applications such as conversational agents. These …