Bitsletter #4: ML Debugging Secrets, Concurrent Promises, and Video Creation with React


🧠 ML Tip: Debug Models With Overfitting

Debugging machine learning models can be hard and time-consuming. Many things can go wrong:

  • Data loading
  • Modeling
  • Training, …

Fortunately, there is a simple trick to detect bugs early: overfitting. Overfitting happens when a model over-specializes on the training set and fails to generalize to new data.

As bad as it sounds, overfitting can be your ally in finding bugs: if your model overfits, at least the training process worked on the training dataset. It is a clue that you don’t have a bug in your code.

The idea is to artificially create the conditions for overfitting:

  • Take a few batches of your training data
  • Train your model on them for many steps
  • If the model overfits (the loss quickly drops to almost 0), your code is likely bug-free
  • If the model can’t overfit, you have a bug somewhere in the code
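The check is framework-agnostic: hold a tiny dataset fixed, run the training loop for many steps, and verify the loss collapses. Here is a minimal sketch of the idea in plain JavaScript (a hypothetical toy linear model standing in for your real one):

```javascript
// Sanity check by overfitting: fit a tiny model on a handful of points.
// If the loss collapses toward zero, the training loop is wired correctly.
const xs = [0, 1, 2, 3];
const ys = [1, 3, 5, 7]; // generated by y = 2x + 1

let w = 0;
let b = 0;
const lr = 0.05;

// Mean squared error over the mini-dataset
function loss() {
  return xs.reduce((s, x, i) => s + (w * x + b - ys[i]) ** 2, 0) / xs.length;
}

for (let step = 0; step < 5000; step++) {
  // Gradients of the mean squared error with respect to w and b
  let gw = 0;
  let gb = 0;
  for (let i = 0; i < xs.length; i++) {
    const err = w * xs[i] + b - ys[i];
    gw += (2 * err * xs[i]) / xs.length;
    gb += (2 * err) / xs.length;
  }
  w -= lr * gw;
  b -= lr * gb;
}

console.log(loss()); // ≈ 0: the model has memorized the mini-dataset
```

With a real model, the same pattern applies: loop over the same few batches and watch the training loss. A loss that refuses to go down on data the model has seen thousands of times almost always means a bug in data loading, the model, or the optimizer step.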

🌐 Web Tip: Await Promises Concurrently With Promise.all

We use promises extensively in JavaScript: they represent the eventual completion or failure of an asynchronous task and its resulting value. The most convenient way to wait for a promise to settle and get its value is the async/await syntax. However, if you have many promises to run concurrently, don’t fall into the trap of awaiting them in a for loop: the code becomes sequential, starting each promise and awaiting its result one after another.

Instead, create an array of promises and call Promise.all(arrayOfPromises) to await them all at once, so the promises run concurrently. Using this technique you can make several API calls at the same time, saving your script a lot of execution time.
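A small sketch of the difference, using a hypothetical fetchItem helper that stands in for a real API call (each one takes about 50 ms):

```javascript
// Stand-in for a real API call: resolves with a value after ~50 ms.
const fetchItem = (id) =>
  new Promise((resolve) => setTimeout(() => resolve(`item-${id}`), 50));

// The trap: each await blocks the next call, so three calls take ~150 ms.
async function sequential() {
  const results = [];
  for (const id of [1, 2, 3]) {
    results.push(await fetchItem(id));
  }
  return results;
}

// The fix: start all three promises first, then await them together.
// Total time is roughly that of the slowest call (~50 ms).
async function concurrent() {
  const promises = [1, 2, 3].map(fetchItem);
  return Promise.all(promises);
}

concurrent().then((r) => console.log(r)); // resolves to ['item-1', 'item-2', 'item-3']
```

Note that Promise.all rejects as soon as any promise rejects; if you want every result regardless of individual failures, Promise.allSettled is the right tool.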

👩‍🔬 Research Paper: Large Language Models are Zero-Shot Reasoners

Large language models hide a reasoning potential that you can unlock with prompt engineering alone (zero-shot learning). They are already known to be great few-shot learners:

Take a pre-trained large model and show it a few examples describing the new task directly in the prompt. With few-shot learning, these models achieve strong performance on reasoning tasks such as arithmetic or symbolic reasoning.

What if we don't even need those examples?

Large Language Models are Zero-Shot Reasoners is a recent paper exploring zero-shot learning for reasoning tasks. Here the model is given no task examples at all: only the prompt is engineered so that the model can handle the task, without any task-specific training.

💡 Simply adding “Let’s think step by step” before each answer significantly increased accuracy on reasoning tasks:

👉🏽 from 17.7% to 78.7% on MultiArith
👉🏽 from 10.4% to 40.7% on GSM8K

These results show the huge zero-shot learning potential of large language models. ⇒ High-level reasoning can be unlocked via clever prompting.
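Concretely, the trick amounts to a prompt template like the one below (the question text is a placeholder; the key is the trigger phrase appended as the start of the answer):

```text
Q: {your reasoning question}
A: Let's think step by step.
```

The model then generates its own chain of reasoning before the final answer, which is where the accuracy gains come from.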

🛠 Tool: Remotion, Create Videos With React

Whether you're an individual creating for fun or a company with a content strategy, video is one of the most consumed forms of content on the internet: over a billion hours of video are watched on YouTube every day.

Remotion is an amazing tool that uses React to build videos. Use HTML, CSS, and JavaScript to create videos programmatically.

Remotion unlocks the power of data-driven videos:
👉🏽 Build a template with React components
👉🏽 Use input data to drive the content
👉🏽 Build infinitely many videos by just changing the data

With Remotion you can also render videos in the cloud:
👉🏽 Build your script describing the video
👉🏽 Run it in the cloud and get back an mp4 video file

💡 Finally, Remotion also has a great video player with live reloading, letting you inspect your video like a web application. ⇒ You can scrub through animations to get your video "frame perfect". Definitely give it a try if you are into video production/editing and web development.

📰 News

Hugging Face Releases “Evaluate”, Metrics For ML

Hugging Face released a library for model evaluation. It contains many metrics, compatible with NumPy/Pandas/PyTorch/TensorFlow/JAX. Bonus: the inputs are type-checked to avoid bugs, and each metric comes with a card describing its values, ranges, and limitations.

GitHub Now Supports Math In Markdown

Math equations pack a lot of information into a compact format ⇒ a great way to communicate. GitHub now supports equations in Markdown, rendered with MathJax: just use $ for inline equations and $$ for block equations (similar to LaTeX).
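For example, in any Markdown file, issue, or comment on GitHub:

```markdown
Inline: the loss is $L = \frac{1}{n}\sum_i (y_i - \hat{y}_i)^2$.

As a block:

$$
e^{i\pi} + 1 = 0
$$
```

Both render as typeset math directly in the page.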

In this newsletter episode, we cover topics such as debugging machine learning models with overfitting, awaiting promises concurrently with Promise.all in JavaScript, the potential of large language models for zero-shot reasoning, the Remotion tool for creating videos with React, and the latest news on Hugging Face's "Evaluate" library for model evaluation and GitHub's support for math equations in Markdown.




