30 days of JavaScript - Part 3 of 5

November 25, 2019

Here is the promised third part of the JS30 Challenge, covering Days 16 through 20.

Again, all my solutions are accessible on my CodePen in case you want to explore the context of some of these exercises in more detail.

Let’s kick it off, shall we?!

Day 16: Mousemove shadow

This lesson provides a few insights into tracking the movement of the mouse on the screen and applying a shadow to a given element as we hover over it.

[Image: shadowed text]

We will learn about various properties:

💥 offsetX
💥 offsetY
💥 offsetLeft
💥 offsetTop
💥 style.textShadow

The HTML is very basic for this project: just one parent element (‘.hero’) and its child h1.

As we have gotten so used to doing by now, before we can apply any JS to our elements, we first have to grab them with query selectors. And so we do that for both the ‘.hero’ element and the ‘text’ element.

Then, we listen for the mousemove event, and when it fires we run a function that adds the shadow while we hover over our ‘.hero’ element.

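A minimal sketch of that listener, assuming the ‘.hero’ and h1 elements described above:

                const hero = document.querySelector('.hero');
                const text = hero.querySelector('h1');
                const walk = 100; // max shadow travel in px

                function shadow(e) {
                    // width and height of the hero element, via destructuring with renaming
                    const { offsetWidth: width, offsetHeight: height } = hero;
                    // current cursor position, relative to the hovered element
                    let { offsetX: x, offsetY: y } = e;

                    // if we are hovering over the h1 rather than the hero itself,
                    // normalize the coordinates by adding the child's offsets
                    if (this !== e.target) {
                        x = x + e.target.offsetLeft;
                        y = y + e.target.offsetTop;
                    }

                    // map the cursor position onto a -50px..50px range
                    const xWalk = Math.round((x / width) * walk - walk / 2);
                    const yWalk = Math.round((y / height) * walk - walk / 2);

                    text.style.textShadow = `${xWalk}px ${yWalk}px 0 rgba(255, 0, 255, 0.7)`;
                }

                hero.addEventListener('mousemove', shadow);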

There are a few things to pay particular attention to in this solution.

Firstly, we take the width and height of our hero element, making use of ES6 destructuring.

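The destructure-and-rename, as in the sketch above, pulls offsetWidth and offsetHeight off the element into width and height in one line:

                const { offsetWidth: width, offsetHeight: height } = hero;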

Secondly, we grab the current position of our cursor using offsetX and offsetY.

Thirdly, this is always bound to the element we are listening to the event on, which in our case is always the hero element. The event target (the thing that actually triggered the event) may change, however: it can also be the hero’s child, the h1 element. Since offsetX and offsetY are measured relative to the element we actually hover over, we check whether the target is the hero element; if it is not, we normalize the coordinates by adding the h1’s offsetLeft and offsetTop.

Lastly, we need to calculate the shadow’s position, that is, how far it should stretch at most. We create a variable called walk and give it a value of 100px: if 100px is our walk, then 50px is as high as we go and -50px is as low as we go (ranging from -50 to 50).

Finally, we just add textShadow to our text element with the values we calculated earlier.

Day 17: Sort Band Names Without Articles

This is a great little exercise to get your brain all warmed up in the morning 😆

The task at hand is to sort the names of all the bands in the array and then display them as unordered list items inside the HTML element already given.

What is important is that we need to do so without taking the articles into consideration (“The”, “An”, “A”). Another gotcha: once sorted, we still need to display the articles in the band names.

I approached the task completely ignoring the second requirement and filtered out all the articles first, but then realised it made sense to display the band names in full, as they are! 😁😁😁

Instead, we come up with a little regex to replace the articles with an empty string, and when sorting our array we apply that strip function to each of the elements. That way we still get the band names sorted, but we do not modify the original names. Access the code here

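A sketch of that approach, with a shortened stand-in band list and a ‘#bands’ ul assumed as the target element:

                const bands = ['The Plot in You', 'An Old Dog', 'A Skylit Drive', 'Norma Jean'];

                // strip a leading article for comparison purposes only
                function strip(bandName) {
                    return bandName.replace(/^(a |the |an )/i, '').trim();
                }

                // sort on the stripped names, leaving the originals untouched
                const sortedBands = [...bands].sort((a, b) => (strip(a) > strip(b) ? 1 : -1));

                document.querySelector('#bands').innerHTML = sortedBands
                    .map(band => `<li>${band}</li>`)
                    .join('');

The comparator calls strip() on both names, so the articles are ignored while sorting, but the full names still end up in the rendered list.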

Day 18: Adding Up Times With Reduce

This is another great exercise to get us to use one of the most common array methods, reduce.

The goal is to sum up the total time a sequence of videos will take us to watch.

Each video is part of an unordered list and has a data-time attribute:

                <li data-time="1:59">
                    Video 42
                </li>

We start by extracting all the list elements by this attribute.

                const timeNodes = Array.from(document.querySelectorAll('[data-time]'));

And then we can map over the array created from the NodeList, extracting the time from dataset.

We want to make sure the time we provide is accurate, so we split the time code into minutes and seconds, and convert the value to seconds in order to add them all up together.

                const seconds = timeNodes
                    .map(node => node.dataset.time)
                    .map(timeCode => {
                        const [mins, secs] = timeCode.split(':').map(parseFloat);
                        return (mins * 60) + secs;
                    })
                    .reduce((total, vidSeconds) => total + vidSeconds);

Once we’ve established the number of total seconds, we want to convert that to hours and minutes.

                let secondsLeft = seconds;
                const hours = Math.floor(secondsLeft / 3600);
                secondsLeft = secondsLeft % 3600;

                const mins = Math.floor(secondsLeft / 60);
                secondsLeft = secondsLeft % 60;

And that’s how we get the final result: console.log(hours, mins, secondsLeft);


Day 19: Webcam with canvas fun

In this lesson we are building a photobooth using video and canvas. We get the video from our webcam and pipe it into a canvas element, so that we can do all sorts of fun things with it, like take a photo, download it and add some cool effects.

In order to see yourself in a frame on the website, we can just use the MediaDevices interface.

It provides access to connected media input devices like cameras and microphones, as well as screen sharing. In essence, it lets you obtain access to any hardware source of media data.

                    function getVideo() {
                        navigator.mediaDevices.getUserMedia({ video: true, audio: false })
                            .then(localMediaStream => {
                                console.log(localMediaStream);
                                // the lesson uses window.URL.createObjectURL(localMediaStream),
                                // which browsers have since removed for MediaStreams
                                video.srcObject = localMediaStream;
                                video.play();
                            })
                            .catch(err => {
                                console.error(`OH NO!!!`, err);
                            });
                    }

Due to the security restrictions around accessing a user’s webcam, it is important to run your own secure server before we start.

When the video plays, it emits a canplay event that we listen to, and then we fire off the paintToCanvas function.

                    video.addEventListener('canplay', paintToCanvas);

Once you get to the stage where you see yourself in the webcam in the upper right-hand corner, you can inspect the video element. You will notice the video is fed by a live stream, the raw data being transmitted in and out of the webcam (in the lesson, which uses createObjectURL, the source shows up as a blob):

[Image: blob video source in the dev tools]

The next thing is to take a frame from the video and paint it onto the canvas element on the screen at regular intervals, using the canvas drawImage() method. Once successful with that, you will see yourself twice on the screen 😆😆
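A sketch of that painting loop, assuming the canvas element and its 2d context (ctx) have been grabbed alongside the video:

                    function paintToCanvas() {
                        // size the canvas to match the incoming video feed
                        const width = video.videoWidth;
                        const height = video.videoHeight;
                        canvas.width = width;
                        canvas.height = height;

                        // repaint the current frame roughly every 16ms (~60fps)
                        return setInterval(() => {
                            ctx.drawImage(video, 0, 0, width, height);
                        }, 16);
                    }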

Then you can move on to taking photos using another canvas method, toDataURL(), which returns a base64 representation of your photo. We can then create a link with this data and prepend it to the strip element, making it a downloadable jpeg at the same time; the whole thing is hooked up to the button’s onClick handler:

                    link.setAttribute('download', 'beauty');
                    link.innerHTML = `<img src="${data}" alt="Beautiful Woman" />`;


                    <button onClick="takePhoto()">Take Photo</button>
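Put together, takePhoto can look something like this (a sketch, assuming a ‘.strip’ element has been selected to hold the photo reel):

                    function takePhoto() {
                        // base64 representation of the current canvas frame
                        const data = canvas.toDataURL('image/jpeg');
                        const link = document.createElement('a');
                        link.href = data;
                        link.setAttribute('download', 'beauty');
                        link.innerHTML = `<img src="${data}" alt="Beautiful Woman" />`;
                        // prepend the new photo to the strip
                        strip.insertBefore(link, strip.firstChild);
                    }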

The last thing is to add some filters to our canvas photos, such as making Wes go red with anger:

[Image: red effect applied to the video]
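The effect itself is just pixel manipulation on the canvas’s ImageData. A sketch of one such filter:

                    function redEffect(pixels) {
                        // pixels.data is a flat array of RGBA values, 4 entries per pixel
                        for (let i = 0; i < pixels.data.length; i += 4) {
                            pixels.data[i + 0] += 100; // red up
                            pixels.data[i + 1] -= 50;  // green down
                            pixels.data[i + 2] *= 0.5; // blue halved
                        }
                        return pixels;
                    }

Inside the paint loop you would pull the frame out with ctx.getImageData(0, 0, width, height), run it through the filter, and write it back with ctx.putImageData(pixels, 0, 0).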

For more details as to what fun effects can be added, check out Wes’s video.

Day 20: Speech Detection

In this lesson we are learning about speech recognition in the browser. For this lesson we again need a local server, as the browser will only allow microphone access from a secure context. Up until this video, I didn’t have a clue such a global variable even existed:

                    // SpeechRecognition is prefixed in Chrome
                    window.SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
                    const recognition = new SpeechRecognition();
                    recognition.interimResults = true;

                    let p = document.createElement('p');
                    const words = document.querySelector('.words');
                    words.appendChild(p);

We start by creating a new instance of speech recognition and setting interimResults to true, so that the browser updates and displays the text as we speak. Then, we create a paragraph element that gets attached to the ‘.words’ div on the screen.

In order for any of that to work, we need to add a speech recognition event listener for the ‘result’ event. As we speak, these events fire off; if we inspect them closer, we will notice each one carries a list of results, where each item contains transcript, confidence and isFinal properties.

[Image: SpeechRecognition results logged in the console]

As the event fires off every time we speak, we create an array out of the results list and map over it to build the transcript string, which we can then set as the textContent of our paragraph element. We also need to make sure we do not overwrite the paragraph after every sentence: we check if the result is final and, if so, create a new paragraph for the next sentence as we speak.

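A sketch of that listener, along with the restart that keeps recognition going after each pause:

                    recognition.addEventListener('result', e => {
                        // build one string out of everything heard so far
                        const transcript = Array.from(e.results)
                            .map(result => result[0].transcript)
                            .join('');

                        p.textContent = transcript;

                        // once a sentence is final, start a fresh paragraph
                        if (e.results[0].isFinal) {
                            p = document.createElement('p');
                            words.appendChild(p);
                        }
                    });

                    // restart recognition when it stops, and kick it all off
                    recognition.addEventListener('end', recognition.start);
                    recognition.start();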

That’s it for Part 3, folks. Hopefully you found this post useful and learnt a thing or two.