01 Feb 2021 - christopher
In 2019, the paper “Energy and Policy Considerations for Deep Learning in NLP” (Strubell, Ganesh, and McCallum, 2019) drew attention to the carbon footprint of machine learning models. Taking this as food for thought, the community has started to think about the long-term effects and consequences.
The CodeCarbon 💨 project is a software package that tracks the carbon footprint of your code. It is already integrated into Comet ☄️, a tool for analyzing and tracking your models.
To exemplify the use of CodeCarbon 💨, I took part of the code from this and Hugging Face’s notebook to define a simple task for fine-tuning a language model (feel free to try any other task). Check out the notebook.
Happy responsible researching!