Graduate/Junior Programmers - How To Prep For An Interview
- Category: Blog
Following up on my previous article about how to get an interview for graduate or junior programming positions, this article will provide some insight on what you can expect at an interview.
A large caveat applies here: interview experiences can vary massively depending on the studio and the interviewers. I can give some general information based on my experience of doing this at several companies, but bear in mind this is very much a personal take. Your experience can and will vary.
By this stage, it's likely you will have completed a technical programming test, and I will have reviewed your submission before the interview and taken notes. I'll also have looked through your portfolio and jotted down a few things to quiz you on.
First up, try to relax (easy to say, I know). If you do get nervous and a bit flustered, it's totally fine to ask for a moment to compose yourself and reset. Have a bottle of water to take a pause with. As a hiring manager, I am very aware that the interview environment is NOT the same as a work environment. I have personally hired someone who had quite a meltdown in an interview, but I could see it was nerves and they were otherwise a strong candidate. I will take the pressure into account.
You're not going to have much work experience, so we won't spend much time on your CV. I'm not hugely interested in your non-games experience or part-time jobs, but anything that shows me you can work in a team environment is a positive.
I will start by asking questions about your coding test and portfolio code. I'll treat it like any other code review: if I see things I don't agree with, I'll ask why you've done them the way you have, to get an understanding of your thinking. I'll also pick out things in your code that I think are really good, and make sure you know why they're good. And I will figure out quite quickly if you've cheated and used AI to generate your submissions.
It's hard to put a comprehensive list together, but there are some C++ fundamentals I'll want to establish that you understand (a small sketch follows this list):
- Scope and lifetime of objects and members
- Constructors and destructors
- Stack vs heap allocation
- Common containers (vector, map, list etc.)
- Strings and issues related to use of them
- Alignment and padding
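To give a flavour of the sort of thing I mean, here's a small illustrative sketch (not from any real test) that touches on scope, lifetime, stack vs heap, and container pitfalls:

#include <string>
#include <vector>

struct Enemy
{
    std::string Name;   // member lifetime is tied to the owning Enemy
    int Health = 100;
};

void Example()
{
    Enemy Grunt;               // stack: destroyed automatically at the end of this scope
    Enemy* Boss = new Enemy(); // heap: lives until explicitly deleted

    std::vector<Enemy> Wave;   // common container: contiguous storage, may reallocate
    Wave.push_back(Grunt);     // copies Grunt into the vector

    Enemy* First = &Wave[0];
    Wave.resize(1000);         // likely reallocates, so 'First' is now dangling
    (void)First;               // don't dereference it!

    delete Boss;               // forgetting this leaks; deleting twice is undefined behaviour
}                              // Grunt's destructor runs here; Wave destroys its elements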
There are some relevant topics that you can get bonus points for demonstrating an understanding of, such as the following (the second point is sketched after the list):
- For Unreal roles, understanding the problems of using lots of blueprints and/or object ticks, and strategies to mitigate them
- An understanding of object-oriented vs data-oriented programming
- Tools and methods for finding and fixing a framerate hitch
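On the object-oriented vs data-oriented point, here is a simplified sketch of the difference (illustrative only):

#include <vector>

// Object-oriented: each object carries all of its data, so a pass that
// only needs one field still drags whole objects through the cache.
struct Particle
{
    float Position[3];
    float Velocity[3];
    float Colour[4];
    float Lifetime;
};

void UpdateOOP(std::vector<Particle>& Particles, float Dt)
{
    for (Particle& P : Particles)
        P.Lifetime -= Dt; // touches one float per object, loads the rest anyway
}

// Data-oriented: fields live in separate contiguous arrays, so this pass
// streams through tightly packed floats and vectorises trivially.
struct ParticleSystem
{
    std::vector<float> PositionsX, PositionsY, PositionsZ;
    std::vector<float> Lifetimes;
};

void UpdateDOD(ParticleSystem& System, float Dt)
{
    for (float& Lifetime : System.Lifetimes)
        Lifetime -= Dt; // cache-friendly: contiguous data, no wasted loads
}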
For graduates, I'll ask about your group projects. I'm not going to focus on your individual contributions, but I will ask some questions about your experience of working in a team environment. This is where you can quite easily get yourself into the 'No' pile. Game development is a difficult profession that needs lots of people working together in a highly pressured and dynamic environment, so the ability to play nicely with other people is absolutely non-negotiable. Launching into a rant about how other people weren't pulling their weight, or otherwise having a moan, is not going to score you points. I will probe you on challenges you faced working in a group, but what I'm really interested in is how you responded and how you resolved the situation.
Depending on timings, I'll put a few general tech questions to you, about game engines, recent games, cool tech, etc. A warning here: arrogance is an extremely undesirable character trait in anyone, but especially in programmers, and even more so in very inexperienced programmers. Confidently asserting that feature X in engine Y or game Z is garbage because you did something better in a uni module or tech demo is not going to go down well.
Lastly, I'll always finish by giving you an opportunity to ask questions. Make sure you have a few; it looks very poor not to have even a couple written down, as though you've not bothered to prep.
UESVON Is No More - Introducing Aeonix Navigation
- Category: Unrealengine
In a previous blog post, I detailed some ongoing work to tidy up UESVON, the sparse voxel octree 3D navigation system I created for Unreal. It's been 8 years (yikes) since I created it, and while functional, it was quite rudimentary and a long way from production ready.
Since departing Ubisoft I've embarked on a big cleanup and refactor of the plugin, and even toyed with the idea of really investing in it with a view to licensing it, but ultimately decided I'm happy to just have it out there for people to use for free, and as a portfolio piece for myself. Anyway, it had always bugged me how much I get nagged for not using PascalCase module names (because of the acronym), so I started from scratch with a new name: Aeonix Navigation (thanks ChatGPT).
Aside from the underlying algorithms and generation code, I have completely rewritten the entire architecture of the system. Rather than being quite Actor-centric as before, it's now all driven by a central World Subsystem that manages navigable volumes and agents in the world. Basically everything has been rewritten with an extra 8 years of engineering experience applied to it. Some key items:
- Central Subsystem to manage all volumes and agents
- Reworked async pathfinding system
- Volumes and agents are represented in Mass as entities
- New debugging tools including dummy editor actors for testing and visualising pathfinding
- New heuristic parameters
- New path optimisations
I have a little more work to do before this initial feature set is complete, then I will make the repository public.
You'll notice I have created a system to handle interchange between Actor and Mass entities. This doesn't expose any new functionality yet, but it will enable efficient ECS patterns for future functionality such as steering behaviours and avoidance. A rough sketch of the subsystem-centric shape follows.
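For a rough idea of what "driven by a central World Subsystem" means in practice, here is a hypothetical sketch (the class and function names are illustrative, not the actual Aeonix API):

#include "Subsystems/WorldSubsystem.h"
#include "AeonixNavSubsystem.generated.h" // illustrative file/class names

UCLASS()
class UAeonixNavSubsystem : public UWorldSubsystem
{
    GENERATED_BODY()

public:
    // Volumes and agents register with the central subsystem, rather than
    // each Actor owning its own slice of navigation state.
    void RegisterVolume(class AAeonixBoundingVolume* Volume);
    void UnregisterVolume(class AAeonixBoundingVolume* Volume);
    void RegisterAgent(class UAeonixAgentComponent* Agent);

    // Async pathfinding: requests are queued centrally and results are
    // delivered back to the agent when ready.
    void RequestPathAsync(class UAeonixAgentComponent* Agent,
                          const FVector& Start, const FVector& End);
};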
Graduate/Junior Programmers - How To Get An Interview
- Category: Blog
Sharing here a post I made on LinkedIn recently. I wanted to provide some general advice for graduates and juniors seeking gameplay programming positions. These insights come from reviewing hundreds of applications and many years as a hiring manager across various studios. This ended up being rather long, so it will be split into two posts.
Step 1 - Securing an Interview
During this stage, hiring managers will be examining potentially hundreds of applications to create a shortlist for the next phase. Bear this in mind.
The primary thing I am looking for is actual code that you have written.
Ensure that it's as straightforward as possible for me to access your best code from your CV with minimal clicks. I don't have the time to search extensively for it, so please pin the repositories you want me to review on your GitHub profile, which you have hyperlinked right at the top of your CV.
The code must be C++. The obvious exception is if you want to limit your options and hitch your career to the Unity wagon, which I wouldn't advise. Using C++ and Unreal is never going to work against you in the games industry.
I am not interested in what you have developed using Blueprint in Unreal. At all. If your portfolio only consists of projects made in Blueprint, you will not be invited to interview for an engineering role.
When it comes to your code, by all means go to town, build your own engine, and make a whole game; you'll walk this first stage in that case! But don't get bogged down by being too ambitious. For a Gameplay Programmer role, I would be totally happy if you simply took an Unreal FPS template and implemented a new feature, such as (a tiny sketch of one follows the list):
- Minimap/marker system
- Pickup/inventory system
- Quest system
- Any little feature from your favourite game!
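As an example of the scale I have in mind, a pickup could start as small as this (an illustrative sketch; the class and property names are made up):

#include "Components/SphereComponent.h"
#include "GameFramework/Actor.h"
#include "HealthPickup.generated.h" // illustrative names, not from any template

UCLASS()
class AHealthPickup : public AActor
{
    GENERATED_BODY()

public:
    AHealthPickup()
    {
        // Overlap sphere that triggers the pickup
        Collision = CreateDefaultSubobject<USphereComponent>(TEXT("Collision"));
        RootComponent = Collision;
        Collision->OnComponentBeginOverlap.AddDynamic(this, &AHealthPickup::OnOverlap);
    }

protected:
    UFUNCTION()
    void OnOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor,
                   UPrimitiveComponent* OtherComp, int32 OtherBodyIndex,
                   bool bFromSweep, const FHitResult& SweepResult)
    {
        // Apply HealAmount to OtherActor's health here (omitted), then remove the pickup
        Destroy();
    }

    UPROPERTY(EditAnywhere, Category = "Pickup")
    float HealAmount = 25.f;

    UPROPERTY(VisibleAnywhere)
    TObjectPtr<USphereComponent> Collision;
};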
Include some documentation on your approach to designing the system, starting with requirements.
Automated testing coverage would be a big plus (a minimal example follows).
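If you do add tests, Unreal's automation framework keeps this lightweight. A minimal sketch, assuming a hypothetical UInventoryComponent from your feature:

#include "Misc/AutomationTest.h"

// Registers a simple automation test visible in the Session Frontend
IMPLEMENT_SIMPLE_AUTOMATION_TEST(FInventoryAddItemTest,
    "Project.Inventory.AddItem",
    EAutomationTestFlags::EditorContext | EAutomationTestFlags::ProductFilter)

bool FInventoryAddItemTest::RunTest(const FString& Parameters)
{
    // UInventoryComponent, AddItem, and GetItemCount are placeholder names
    UInventoryComponent* Inventory = NewObject<UInventoryComponent>();
    Inventory->AddItem(FName("HealthPotion"), 3);

    TestEqual(TEXT("Item count after adding"),
              Inventory->GetItemCount(FName("HealthPotion")), 3);
    return true;
}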
Cleanly formatted, self-documenting code is vastly more desirable than reams and reams of comments describing every single line.
Don't leave commented-out code in; this is a pet hate of mine. Learn to use source control. A contrived before and after illustrating both points follows.
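// Before: comments narrate every line, and dead code is left in "just in case"
float Calc(float a, float b)
{
    // float result = a + b; // old formula, might need it later
    // multiply a by b and return it
    return a * b;
}

// After: descriptive names make the code self-documenting, and the old
// formula lives in source control history, not in the file
float ComputeDamage(float BaseDamage, float Multiplier)
{
    return BaseDamage * Multiplier;
}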
Again, this is not to limit your ambition; it's more to say that something fairly simple but thoughtfully and competently executed will serve you better than a big, ambitious, sprawling mess you never finished or had time to tidy up.
Finally, don't worry about your code not being good enough! I'm not expecting you to produce the same AAA production-quality code that experienced seniors do; in fact, making a few rookie mistakes will give me something to quiz you on at interview! The most important thing is that you wrote some code and I can see it. (I can't emphasise this enough.)
If you can manage this and have a code repository that demonstrates an ability to take a feature requirement and design and implement it in engine, with some nice tidy code and documentation, then you're going to put yourself right at the top of the pile for getting through to the interview stage, which I'll cover in my next post.
Using local LLMs to enhance productivity in Unreal development with JetBrains Rider
- Category: Code
So you're interested in using the power of a Large Language Model to help out while programming in Unreal, but due to the usual game industry NDAs and security concerns, you can't have your code getting shipped out to external services? Well, in this article I'm going to run you through setting up your development environment so you can run an LLM locally and integrate it into JetBrains Rider to provide code generation, analysis, recommendations, and autocomplete.
I'm using JetBrains Rider for this for a few reasons. Visual Studio 2022 has a well-integrated solution for GitHub Copilot, but the third-party extensions for local LLMs that I looked at are suspiciously black-box, and frankly I don't trust them. Visual Studio Code does have some good extensions available, but using VS Code with Unreal is... not great.
JetBrains Rider has good extensions all round, is generally a nice IDE for working with Unreal, and is now free for non-commercial use.
Prerequisites
Firstly, you will need to have JetBrains Rider installed. Download it here.
Secondly, you will need to install Ollama. Download it here.
Follow the instructions to get Rider and Ollama installed. You should be able to open your Unreal project solution in Rider and build and run the game/editor. You should have the Ollama server running (you'll see a llama icon in your system tray).
To check your Ollama setup, open a command prompt and type 'ollama'. You should see the usage information. Type 'ollama list' and confirm that you have no models installed yet.
Continue.dev Install
Now you can install Continue.dev; download the plugin for JetBrains IDEs here.
Once installed, you should have the Continue icon in your sidebar. Click it to open the chat window. Now we need to set up our local AI models.
Local LLM Install
There are three different models you need to set up in Continue. One is the chat model, which is the one you will interact with like ChatGPT/Copilot, by asking questions. Another is the autocomplete model, which inspects the text before and after your cursor and suggests text to complete what you're typing. The last is an embedding provider, which, in layman's terms, is a model that parses the text of your code project and transforms it in a way that lets the main chat model reason about your code.
Ollama will try to run your LLMs on your GPU if there is sufficient VRAM, but will fall back to your system RAM. Models run faster on the GPU, as you'd expect. Which models you use is going to depend massively on your hardware resources. I am running an NVIDIA RTX 4070 with 12GB of VRAM, which is quite restrictive. I can run a small autocomplete model (~3GB) alongside Unreal no problem, but a larger 7-billion-parameter model is going to eat ~9GB of VRAM. As such, you will need to think about which models you want running. To inspect which models are loaded, and where they are resident, run 'ollama ps' on the command line:
C:\>ollama ps
NAME                   ID              SIZE      PROCESSOR    UNTIL
qwen2.5-coder:1.5b     6d3abb8d2d53    3.3 GB    100% GPU     17 minutes from now
qwen2.5-coder:latest   2b0496514337    6.0 GB    100% GPU     13 minutes from now
As you can see here, I have a couple of models loaded, both on the GPU. This isn't leaving much VRAM for Unreal and Windows. If I'm not using the chat functionality and just want autocomplete, I might want to stop the larger model, which I can do like so:
C:\>ollama stop qwen2.5-coder:latest
C:\>ollama ps
NAME                 ID              SIZE      PROCESSOR    UNTIL
qwen2.5-coder:1.5b   6d3abb8d2d53    3.3 GB    100% GPU     14 minutes from now
As you can see, the 6GB model has been unloaded.
Ideally, you have a beefy GPU like a 4090/5090, with enough VRAM to happily accommodate several models and your game engine. Now, let's get into setting up the models we want to integrate into Rider.
Embedding Model
We will use the Nomic model for embeddings (parsing our codebase). To install it, open a command prompt and run:
C:\>ollama pull nomic-embed-text
pulling manifest
pulling 970aa74c0a90... 100% ████████████████████████████████████████ 274 MB
pulling c71d239df917... 100% ████████████████████████████████████████  11 KB
pulling ce4a164fc046... 100% ████████████████████████████████████████   17 B
pulling 31df23ea7daa... 100% ████████████████████████████████████████  420 B
verifying sha256 digest
writing manifest
success
All done. You can check which models you have installed with:
C:\>ollama list
NAME                      ID              SIZE      MODIFIED
nomic-embed-text:latest   0a109f422b47    274 MB    About a minute ago
Chat Model
This is the model you will ask questions of. Ideally you want the biggest model your hardware can support, but your hardware also needs to run Rider and Unreal, so compromises will be required. For this example, I'm going to use the qwen2.5-coder model, with 7 billion parameters and 4-bit quantization. This will consume 8.9GB of memory. Install the model like this:
C:\>ollama pull qwen2.5-coder
pulling manifest
pulling 60e05f210007... 100% ████████████████████████████████████████ 4.7 GB
pulling 66b9ea09bd5b... 100% ████████████████████████████████████████   68 B
pulling e94a8ecb9327... 100% ████████████████████████████████████████ 1.6 KB
pulling 832dd9e00a68... 100% ████████████████████████████████████████  11 KB
pulling d9bb33f27869... 100% ████████████████████████████████████████  487 B
verifying sha256 digest
writing manifest
success
Autocomplete Model
Now for the autocomplete model, I'm currently using a smaller version of qwen2.5-coder, the 1.5-billion-parameter version. It still provides good suggestions while being more responsive, which is important for an autocomplete model. It uses 3.3GB of memory.
C:\>ollama pull qwen2.5-coder:1.5b
pulling manifest
pulling 29d8c98fa6b0... 100% ████████████████████████████████████████ 986 MB
pulling 66b9ea09bd5b... 100% ████████████████████████████████████████   68 B
pulling e94a8ecb9327... 100% ████████████████████████████████████████ 1.6 KB
pulling 832dd9e00a68... 100% ████████████████████████████████████████  11 KB
pulling 152cb442202b... 100% ████████████████████████████████████████  487 B
verifying sha256 digest
writing manifest
success
Continue.dev Configuration
Back in Rider, open the configuration panel in Continue.
Then choose 'Open configuration file'. This will open Continue's config.json in Rider.
Embedding Provider Config
Add the following section to the config file. This tells Continue to use the nomic-embed-text model from the local Ollama install for embeddings:
"embeddingsProvider": {
"maxBatchSize": 32,
"provider": "ollama",
"model": "nomic-embed-text"
}
Chat Model Config
Next, set up your chat models. You can have multiple models configured here, which will be accessible from a dropdown in the chat window. Add an entry for our local qwen2.5-coder model under the models section:
"models": [
{
"title": "qwen2.5-coder:latest",
"provider": "ollama",
"model": "qwen2.5-coder:latest",
"apiBase": "http://localhost:11434",
"apiKey": ""
}
]
Autocomplete Model Config
Now configure our local qwen2.5-coder:1.5b model as the autocomplete model:
"tabAutocompleteModel": {
"title": "AutocompleteModel",
"provider": "ollama",
"model": "qwen2.5-coder:1.5b",
"apiBase": "http://localhost:11434",
"apiKey": ""
}
Test Codebase Indexing
Now, open the 'More (...)' panel in Continue and click 'Click to re-index' to test the embedding setup. You should see it parse your codebase and report 'Indexing complete'.
Chat Usage
Now, let's try out the chat model. Go back to the chat window in Continue. I have a simple prototype project loaded, which includes my 3D pathfinding plugin, so let's ask about adjusting the pathfinding heuristics. I ask 'Where can I adjust the pathfinding heuristics?', and then press Ctrl+Enter.
Ctrl+Enter is important here, as it includes the @codebase context. Essentially, this looks at your question, finds the most relevant files in your workspace, and includes them in the chat context. I've included the output below.
Note that I've expanded the Context section here, so you can see that it has included the relevant pathfinding-related files from my plugin.
The LLM has correctly identified the function that implements the pathfinding heuristics, and even suggested a new heuristic type, as well as showing me how to implement it. Pretty neat!
Autocomplete Testing
Testing autocomplete is as simple as placing your cursor anywhere in a code file. Here I'm in the GetLifetimeReplicatedProps function, and it correctly suggested a DOREPLIFETIME macro, which I can accept by pressing Tab.
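For reference, the resulting function looks something like this (AMyCharacter and the Health property are placeholder names for this illustration):

#include "Net/UnrealNetwork.h"

void AMyCharacter::GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const
{
    Super::GetLifetimeReplicatedProps(OutLifetimeProps);

    // The line suggested by the autocomplete model, accepted with Tab
    DOREPLIFETIME(AMyCharacter, Health);
}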
Summary
So, that's the basic setup to get local LLMs integrated into your Unreal workflow with Jetbrains Rider. Once you start using the tools, you can experiment with different models, different sizes, and start finding more ways to enhance your workflow.