Artificial Intelligence, Computer Science
Author: LLaMA 2 7B Chat
Page 50/179
LLaMA-2, the next generation of LLaMA. Meta trained and released LLaMA-2 in three model sizes: 7, 13, and 70 billion parameters. The model architecture remains largely unchanged from that of the LLaMA-1 models, but 40% more data was used to train the foundation models. The accompanying preprint also mentions a 34B-parameter model that may be released in the future once it meets safety targets.
Computer Science, Computer Vision and Pattern Recognition
Predicting Human Behavior Over Time: A Survey of Deep Learning Techniques
Computer Science, Computer Vision and Pattern Recognition
Efficient Continual Learning for Video Representations via Subnetworks
Improving Speech Emotion Recognition with Ablation Studies and Multi-Scale DNNs
Computer Science, Machine Learning
Advances in Time Series Anomaly Detection: A Survey
Computation and Language, Computer Science
Pruning Language Models: Mitigating Performance Degradation with Bias Compensation
Computation and Language, Computer Science
Automated Question Generation and Scoring for Climate Change Discussions
Computer Science, Software Engineering
Enhancing Incident Management with Large Language Model-Powered Query Recommendations
Computer Science, Computer Vision and Pattern Recognition
Advances in Controllable Human Motion Synthesis and Editing
Computer Science, Information Theory
Infinite Families of Near MDS Codes Holding t-Designs
Computer Science, Human-Computer Interaction
Assessing AI’s Environmental Impact: A Call for Sustainable Advancements
Computation and Language, Computer Science
Unsupervised Feature Learning via Non-parametric Instance Discrimination
Computer Science, Computer Vision and Pattern Recognition
Semi-Supervised Domain Adaptation for Object Detection with Less Labeling
Computer Science, Software Engineering
Mitigating Concurrency Bugs in Go Programs: A Survey
Rethinking Air Pollution Monitoring: A Comparative Study of Low-Cost Sensors and Satellite Observations
Computation and Language, Computer Science
Reassessing Relevance: Evaluating the Accuracy of Fine-Tuned Language Models
Computation and Language, Computer Science
Inferring Task Preferences in Language-Based AI Systems through Probabilistic Reasoning
Artificial Intelligence, Computer Science
Improving Algorithm Performance with Appropriate Agendas
Privacy-Preserving Trajectory Filtering and Verification Using Secure Multi-Party Computation
Mathematics, Numerical Analysis