• Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations
for deep learning in NLP. Proceedings of the 57th Annual Meeting of the Association
for Computational Linguistics, 3645–3650.
• Schwartz, R., Dodge, J., Smith, N. A., & Etzioni, O. (2020). Green AI.
Communications of the ACM, 63(12), 54–63.
• Henderson, P., et al. (2020). Towards the systematic reporting of the energy and
carbon costs of machine learning. Journal of Machine Learning Research, 21, 1–43.
• Han, S., Mao, H., & Dally, W. J. (2016). Deep compression: Compressing deep neural
networks with pruning, trained quantization, and Huffman coding. International
Conference on Learning Representations (ICLR).
• Jacob, B., et al. (2018). Quantization and training of neural networks for efficient
integer-arithmetic-only inference. Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition (CVPR), 2704–2713.
• Jouppi, N. P., et al. (2021). Google TPUv4: Scaling energy-efficient AI infrastructure.
IEEE Micro, 41(5), 17–29.
• Patterson, D., et al. (2021). Carbon emissions and large neural network training.
arXiv preprint arXiv:2104.10350.
• Roy, K., Jaiswal, A., & Panda, P. (2019). Towards spike-based machine intelligence
with neuromorphic computing. Nature, 575(7784), 607–617.
• Wu, J., Xu, Y., & Li, S. (2021). EfficientNet revisited: Redesigning architecture for
resource efficiency. IEEE Transactions on Neural Networks and Learning Systems,
32(12), 5480–5493.
• Zhou, Y., & Han, S. (2020). Hardware-aware neural architecture search for efficient
inference. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(12),
2970–2983.
• Tang, Y., & Pan, Y. (2021). Energy-efficient deep learning for edge AI. IEEE Internet
of Things Journal, 8(8), 6763–6772.
• Gupta, S., & Sharma, R. (2022). Carbon-neutral AI: Pathways toward sustainable
intelligence. Sustainability, 14(3), 1137.
• Sun, X., & Lin, T. (2021). Green data centers for AI workloads. IEEE Access, 9,
105234–105247.
• Zhang, W., & Zhao, H. (2023). Carbon accounting frameworks for machine learning.
Frontiers in Artificial Intelligence, 6, 113890.