
Welcome to My Blog

If you're looking for my portfolio website visit: ben-mcleod.com.

There isn't much here yet in all honesty - and that's probably a good thing - it's not like the internet needs another blog. My plan is to keep this a useful-ish place, full of write-ups and guides for things I get up to in my spare time, whether that be Raspberry Pi projects, penetration testing labs, cool tech I've messed around with... You get the picture.

Recent Posts:

Deep Fakes at Home?

AI Demo

So, What is it?

What you're looking at is "Deep Live Cam", the brainchild of hacksider.
It's a program that uses deep learning models running locally to swap your face, either in static pictures or - and this is the cool (or dangerous) part - through your webcam LIVE.

You can see in the demo above that hacksider has swapped his face with Elon Musk's. This will run on any computer with a modern CPU, and even better on one with a decent GPU (NVIDIA 30 series and above).

So, as you can imagine, the gamer in me with an RTX 3080 and an Intel i5 12700K jumped at the chance to give this a try...

Ben as The Hulk

As you can see, it's pretty funny to live out your childhood fantasies and see what you would look like as your favorite childhood superhero. More interesting to me, though, was just how easy this software was to install, and how scary technology like this may become in the future. For the past 100 years, photographic - and even more so - video evidence has been the gold standard in society. But now? Now the lines are blurred, perhaps forever.

Installation

Once all the prerequisites like Chocolatey, Python, Git, FFmpeg and NVIDIA CUDA are installed, the entire software can be downloaded to a folder on your computer in a matter of seconds, and spun up with one simple command:

python run.py --execution-provider cuda

Once open, you'll get a screen like the one below, which is extremely simple to operate: Deep Live Cam UI

Final Thoughts

I don't really have anything profound to say about the tool, if I'm honest.
I recall that about 5 years ago, NVIDIA released their own version of software like this, which could be downloaded publicly by anyone, and it blew the world away. It was very quickly taken down, and only those who had downloaded copies could still run it. As technology advances, it seems solo developers are now capable of building tools that, just 5 years ago, a behemoth like NVIDIA kept closely guarded.

What remains to be seen is whether tools like this - especially their future, more developed versions - change the world as we know it, so that video proof is no longer to be believed. Or, like photoshopping before it, will deep fakes become something the human eye and brain evolve alongside? The earliest versions fooled all of us, but as we learn what to look out for and what quirks deep fakes tend to have, will the trained eye be able to spot them a mile away?
Either way, it's an interesting time to be in tech!

The Lab!

Ben's HomeLab

What Is All That Stuff?

From left to right: I'm using a Synology DS420j NAS with 10TB of storage in Synology Hybrid RAID, offering one-disk redundancy. This NAS runs:

  • Multiple file shares for media streaming (using DLNA to stream to any device in the house), computer backups, and photo backups.
  • Plex Server (experimental - it may replace the DLNA setup)
  • Various Docker containers

We then have 3 x HP mini PCs clustered together in Proxmox, allowing me to pool their processing power, RAM and storage to create one big supercomputer (sort of). So far, the HP cluster is running:

  • Home Assistant to allow me to run my smart home completely independently from Apple, Google or Amazon.
  • A ParrotOS Linux VM so that I can hop on and do some HackTheBox challenges whenever I feel like it.
  • Pi-hole to block pesky advertisements.

And honestly... That's it so far. I've been looking at a bunch of cool projects to potentially undertake but for now I'm just enjoying having a HomeLab to experiment with and having it all not broken.

Having the ParrotOS VM up and running whenever I want it is especially nice. So often I'd want to experiment with something in Linux or do a quick challenge on HackTheBox, and I couldn't be bothered dealing with VMWare or VirtualBox, which would inevitably force me to update before I could do anything. Now I just have a VM running whenever I want it!

Home Assistant has also been awesome. I've managed to create a neat dashboard which tracks the weather around our local area, shows me all my home security cameras in one place, and even lets me control the lights in all of our rooms (once I get around to adding the smart bulbs, that is).

HomeLab!

What On Earth is a HomeLab?

In short, a HomeLab is... Whatever you want it to be! Thanks, cya!

Alright, not very helpful. A HomeLab is one or more computers in your home used for... stuff. You could have a Network Attached Storage (NAS) device holding all your favorite movies and TV shows, a home assistant monitoring the temperature of your house, turning your lights off and on, or even feeding your cat, or a little Raspberry Pi set up as an adblocker so you never have to see an advert ever again - the possibilities are endless.

Where To Start?

That was the question.

Raspberry Pis

I already had a Raspberry Pi laying around, and there are endless things you can do with a Pi, but they don't have much processing power and are pretty limited on RAM. I could cluster them, but that would mean purchasing 3 or 4 Pis, and at around £55 for a 4GB model at the time of writing, the costs add up fast! There are 8GB models available for £80 which, when clustered, would offer me enough RAM to do some interesting things, but then I'm constrained by the small CPU on the Pi, so the cost-benefit analysis doesn't look great here. Time to look at other options.

A Raspberry Pi

Repurpose a Desktop PC

I shot this idea down almost as quickly as it popped into my head. The one issue with a desktop PC is the power it uses. I can pack all the bells and whistles into it, but it's going to absolutely drink power from the wall. Not to mention, I ideally want to get into virtualisation and clustering, so putting all my eggs into one basket with a monolithic desktop PC doesn't give me any redundancy if it were to fail. Onto the next one.

Desktop PC Server

Mini PCs

Since I want more of a fully fledged PC, with a somewhat decent CPU and a solid amount of RAM, mini PCs seem to fit the bill. They pack everything I want into a small package whilst keeping power consumption down to a range not too dissimilar to a Raspberry Pi. On top of that, tons of companies use these mini PCs for much the same reasons I want them: they pack enough of a punch to get the job done whilst keeping energy costs down, so they make great office desktops. Since most companies with a decent IT team refresh their devices every 3-6 years, there are plenty of mini PCs on the market with 7th-gen Intel processors and 16GB of RAM, and they can be had for £50-£90 depending on how savvy the eBay seller is. This essentially means you're getting a faster CPU than a Pi, more RAM (which is upgradeable), and an SSD inside rather than the microSD card of the Pi.
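The rough economics can be sketched in a few lines (all prices are the illustrative figures quoted in this post; real eBay listings vary):

```python
# Back-of-envelope cost per GB of RAM, using the prices quoted in this post
# (illustrative figures; real listings vary).
options = {
    "Raspberry Pi 4GB": {"price_gbp": 55, "ram_gb": 4},
    "Raspberry Pi 8GB": {"price_gbp": 80, "ram_gb": 8},
    "Used mini PC":     {"price_gbp": 70, "ram_gb": 16},  # mid-point of £50-£90
}

for name, spec in options.items():
    per_gb = spec["price_gbp"] / spec["ram_gb"]
    print(f"{name}: £{per_gb:.2f} per GB of RAM")
```

At these numbers the used mini PC comes in around £4.38 per GB of RAM, versus £10-£14 for the Pis - before even counting the faster CPU and the bundled SSD.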

Mini PC Cluster

The Oopsie

In my excitement to bag the good deal I saw on eBay, I (stupidly) assumed these mini PCs would have quad-core processors. They don't. They have two cores. Given that quad-core processors have been pretty commonplace since 2007, I thought it was a safe assumption that the CPUs in my mini PCs (originally released in 2017) would be quad-core. I suppose cooling a quad-core CPU in a mini PC would be more of a challenge, so to manage heat they skimped on cores. Boo.

Not a problem - that's where overcommitment comes to the rescue, sort of. Since I'm not running machine learning models, gaming, or doing anything super intensive on these computers, their cores won't be utilised all the time anyway. This means I can allocate the same CPU core to two or more different virtual machines, and they can both use it to complete their tasks. Think of it like hiring a chef: they're only one person, but since they aren't going to be cooking all of the time, they can clean the kitchen too! Two jobs, one person. But, just like with our chef, if the kitchen suddenly got swamped with orders there would be no time for the second job of cleaning, and the same goes for overcommitting our CPUs. If I run too much, there just won't be enough juice to go around. But as long as I'm smart about it, this shouldn't be an issue.
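As a sketch of the accounting involved (the VM names and vCPU counts below are hypothetical examples, not my exact Proxmox allocations):

```python
# CPU overcommitment in numbers: total vCPUs handed out vs physical cores.
# The VM allocations below are hypothetical, not my actual setup.
physical_cores_per_node = 2   # the dual-core surprise
nodes = 3
total_cores = physical_cores_per_node * nodes

vcpus_allocated = {
    "home-assistant": 2,
    "parrot-os": 2,
    "pihole": 1,
    "test-vm": 2,
}
total_vcpus = sum(vcpus_allocated.values())
ratio = total_vcpus / total_cores

print(f"{total_vcpus} vCPUs across {total_cores} physical cores "
      f"-> overcommit ratio {ratio:.2f}:1")
# Anything above 1:1 relies on VMs being idle some of the time; if every
# VM gets busy at once, they queue for the same cores - the swamped chef.
```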

Sustainability

During this project, sustainability is an angle I wanted to look at. Sure, I could go out and buy whatever I wanted and build the perfect HomeLab from the ground up, but what about all that e-waste? If these mini PCs don't get sold to people like me, they'll end up in landfill. Most people don't realise it, but to make the average computer you need:

  • Gold
  • Silver
  • Platinum
  • Palladium
  • Cobalt

and with companies refreshing these devices every 3-6 years, that's a lot of materials being dumped. So, as part of this project I wanted to give old hardware a new life!

In addition to keeping these computers out of landfill, I picked something low power. These HP mini PCs idle at <7W each, meaning running them all year should cost me less than £20. Combine that with the solar panels on my roof, and this entire project runs itself from the power of the sun: nothing had to be specifically manufactured to make it happen, every component is reused, and the power supplied is renewable.
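The sub-£20 figure checks out with some rough maths (the electricity price here is an assumption; UK tariffs vary):

```python
# Rough annual running cost for one mini PC idling at ~7 W.
# The electricity price is an assumed illustrative figure; tariffs vary.
idle_watts = 7
hours_per_year = 24 * 365          # 8760
price_per_kwh_gbp = 0.30           # assumed UK-ish rate

kwh_per_year = idle_watts * hours_per_year / 1000
annual_cost = kwh_per_year * price_per_kwh_gbp
print(f"{kwh_per_year:.1f} kWh/year -> £{annual_cost:.2f} per PC")
```

At that rate a single PC comes in around £18 a year; the exact figure obviously scales with your tariff and how hard the machines are working.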

Solar Panels

Moving Forwards

I plan on keeping this category alive and writing up what I got up to with my HomeLab. Hopefully it'll be useful to anyone that is interested in this kind of thing and at the very least it'll be good documentation for me when I inevitably break something!

Browser extensions. We've all probably used them at some point. Whether it's to block adverts or to improve the functionality of websites, browser extensions have proven to be nifty little gadgets for well over a decade. It was this nifty-ness that prompted me to become interested in them. They were easy to distribute - people just click "install" in an app store - they were lightweight, and yet they had the power to transform the internet. Count me in!

So, long story short, I developed an extremely niche browser extension which now has over 100 users worldwide. It's open source, and has been growing steadily since its release.

Focus Search

The process has been extremely rewarding on multiple fronts. I first came up with the idea for Focus Search whilst at university. When researching for an essay, or trying to solve a bug in my programming work, I'd do what everyone does: go to a search engine or website and search for my issue. The problem was, I always found I'd have to manually click inside the search box on any given website to begin my search. It seemed obvious that if I were going to a library, directory, or eCommerce website, I'd already have an idea of what I'm looking for; I'm not going to click around and browse, so why not have the search bar automatically "in focus", ready for me to type? Why force me to tediously move my hands from my keyboard, click the search box, then go back to typing each time?

I'm aware this seems like a very minor inconvenience - if we can even call it that - but for people doing 50-100 of these searches per day, it becomes extremely annoying! (I did warn you this addon was niche.)

Anyway - I took this idea to my university supervisor, planning to create it for my final year project. However, I was told in no uncertain terms that it was pretty useless, served no real-world purpose, and wasn't really complex enough to be considered for the project. Annoyed, I shelved the idea whilst I continued my studies. By this time, I'd conducted market research to see if anything like my idea existed - because why go to all the trouble of building it if I could just click "install" and have the problem solved for me? Well, it didn't exist. I checked the Chrome Web Store, Mozilla's add-ons site, and even looked around forums - nobody was offering anything.

Cut to 4 months later, after I'd finished university: I revisited the idea, as I was still annoyed by the way the web worked. To my surprise, not one but three addons had now popped up solving the issue I'd originally identified. Even more surprising, together they had thousands of users! Not only had other people created a solution to the problem, but thousands of people had been waiting in the wings, actively looking for a solution as well!

Armed with this exciting new knowledge I set to work immediately. The first step was of course, market research:

  1. What were my competitors offering?
  2. How have they implemented the features?
  3. What do their users praise them for on the reviews page?
  4. How does it look? Is it simple to use?
  5. Why do people go back?

Those were the 5 criteria I focused on. My thinking was that if I could match, or even beat, them at these 5 things, I'd be in a good place. So, after hours of testing out my competitors' addons (thanks, guys!) I got to work...

The first thing that became clear was that the second the user hits install, the addon needs to be working. No setup screens, no interaction from the user - just install and away you go. The implementation step was fairly simple. The whole point of the addon is that it automates an action for the user, so the feature itself is, well, automatic: the second the addon sees a search bar, it focuses it, ready for the user to type. Where I differentiated myself was in also letting the user trigger it with a keybind. This added flexibility, and was something I'd seen people request in the reviews section on my competitors' pages - score 1 for Focus Search!

It was at this point I realised I had what the Agile software development methodology calls a "Minimum Viable Product": whilst it wasn't perfect, the addon was already in a state where it did what I set out to do. So I released it into the world! I was already weeks behind my competitors, so I had no time to waste. I threw some branding together, created a quick and snappy video trailer for the store page, and made my work public!

With the addon now out there and gaining users, albeit in its Minimum Viable Product form, it was time to transform it into something I was proud of, and something users would enjoy using. After all, if you look at Apple and Android phones, they aren't really that different. They both call and text, they both run apps, they both have cameras and fingerprint scanners... And yet Apple has a cult following. Why? Ease of use. I'd be fighting a similar battle with my addon. Since my competitors' addons achieved the exact same end goal as mine, how would I differentiate myself? How would I keep people coming back, and make new users want to use my addon? A sleek user interface with simple controls was needed.

So that's exactly what I created.

Focus Search UI

People don't care about the addon, not in the same way I do. They just need to understand which button represents it, and know how to interact with it. The interface for Focus Search features a small logo representing the addon - whenever users see this logo, they'll know what it links to. Below that sits a great big blue toggle button, right in the middle of the screen, with text reading "ON", so that the user can be in no doubt that the addon is switched on and functioning. When they click this toggle, the button changes to grey with text reading "OFF", so that... you guessed it, the user can be in no doubt the addon is switched off and won't function until they decide to turn it back on.

Now for my big-brain move. I thought to myself: what about user retention? What about when someone starts to question whether this Focus Search thing is even working, or whether it's helping them at all? So, I decided to add a Time Saved metric to the bottom of the addon. This metric is tracked in the background whenever the addon triggers, and tallies the amount of time the addon has saved the user. THAT was my golden idea. Appeal to self-interest! Let the user know the addon is working FOR them. It's all well and good having a great piece of software, but if the user doesn't understand WHY it's great, they'll never care!
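The tally itself is simple bookkeeping. Here it is sketched in Python for illustration (the real addon is JavaScript, and the seconds-saved-per-trigger figure below is a made-up placeholder, not the addon's actual estimate):

```python
# "Time Saved" metric: count each auto-focus trigger and multiply by an
# estimated per-trigger saving. The 2-second figure is a hypothetical
# placeholder, not the addon's actual estimate.
SECONDS_SAVED_PER_TRIGGER = 2

class TimeSavedCounter:
    def __init__(self, triggers=0):
        self.triggers = triggers

    def record_trigger(self):
        """Called whenever the addon focuses a search box."""
        self.triggers += 1

    def seconds_saved(self):
        return self.triggers * SECONDS_SAVED_PER_TRIGGER

counter = TimeSavedCounter()
for _ in range(75):          # a busy day of searching
    counter.record_trigger()
print(f"Time saved so far: {counter.seconds_saved()} seconds")
```

Persist the running total and format it as minutes or hours once it grows, and the user can literally watch the addon earning its keep.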

Below that, I've added a button that links to the Focus Search website. This was purely to give the addon some credibility. Yes, it might look nice, and yes, it may function well, but who is behind it? Some savvy users might want to know a bit more about a "company" before they install its software on their computers. So, I created a short but snappy page explaining the history of Focus Search and who makes it - ME!

Hello, World!

If everything has clicked into place (and I haven't broken anything) you're reading my very first blog post!

My website started off as a static portfolio, which was great at first. It did exactly what I needed - it got me a first in my web development module at university, as well as showcasing my work to future employers, without too many bells or whistles. But in all honesty, it was a bit... too static. So I've decided to add some life to it with a blog... because the internet needed more of those!

Before I could make this blog, I needed a way to actually update the website fast, as making a blog post should be quick and painless. My previous workflow was as follows:

  1. Code any updates in VS Code

  2. Commit those updates to GitHub

  3. Upload the local files manually to my hosting provider.

    • Inevitably fumble with a 2FA code somewhere along the line.
  4. Hit the deploy button and beam the updates to the interwebs.

This was... tedious. It didn't lend itself well to creating content. Well, not anymore! I've streamlined the whole thing thanks to some clever work behind the scenes. I discovered that my hosting provider plays well with GitHub, which has allowed me to link my hosting directly to my website's GitHub repo, so every time I push updates to the main branch - boom, the changes are live. Thanks to this, my workflow now looks like this:

  1. Make the changes in VS Code.

  2. Commit the changes to GitHub - the changes are now live.

And just like that, I've halved my workload whilst also adding a blog!

The last piece of the puzzle was figuring out where to put this blog, and that's where subdomains came to the rescue! I still wanted future employers and curious people to have a streamlined and neat way of learning about me, so I've kept the static site as it always was and spun up 'blog.ben-mcleod.com'. And now here we are: the main site isn't cluttered with blog posts, and the only people who have the misfortune of seeing my ramblings are the poor souls who discover this page!