Okay, I work as a programmer, and there is a reason projects work the opposite way. You first have to have a working product that passes whatever QA you have; then you optimise and build on it. If you have to optimise on day one, nothing will ever get done. I should know: that's why I have a ton of personal projects stuck in development hell.
Why would games be different?
I mean, putting in a bit of thinking before you actually hit the keyboard can be an incredibly effective form of optimization, if, for example, you can get an O(n^2) algorithm down to O(log n). You'll even save time by not having to rework the thing later, and if you build on poor foundations, chances are you'll run into fundamental architectural problems down the road, which can be extremely costly in terms of development time.
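A toy illustration of that kind of complexity win (my example, not the commenter's): detecting duplicates with a nested loop versus sorting first.

```python
def has_duplicate_quadratic(items):
    # O(n^2): compare every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_sorted(items):
    # O(n log n): after sorting, any duplicates sit next to each other,
    # so a single adjacent-pair scan finds them.
    s = sorted(items)
    return any(a == b for a, b in zip(s, s[1:]))
```

Same answer either way, but the second version is the one that still works when the input grows by a few orders of magnitude.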
Yep, taking some care early on can pay dividends down the road. The data structures you choose really matter, and YAGNI ("you aren't gonna need it") can stop you from going overboard with indirection and other shit. Premature optimization is bad, but there's nothing wrong with writing performant software as long as it's still comprehensible and extensible.
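The data-structure point is easy to see in miniature (a sketch of mine, not from the thread): the same membership check against a list and a set.

```python
import timeit

n = 10_000
as_list = list(range(n))
as_set = set(as_list)

# Membership in a list scans elements one by one: O(n) per lookup.
# Membership in a set is a hash lookup: O(1) on average.
t_list = timeit.timeit(lambda: (n - 1) in as_list, number=1000)
t_set = timeit.timeit(lambda: (n - 1) in as_set, number=1000)

print(f"list: {t_list:.4f}s  set: {t_set:.4f}s")
```

One line of difference at declaration time, a large constant-factor difference every time the hot path runs.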
I'm currently working on a project that has been optimized from the start. No one understands the state of objects or the order things are supposed to happen in (because the lead implemented his own special brand of lazy loading). So we have a lot of bugs, and everyone is constantly double-checking everything, which kills whatever optimization was there in the first place.
I work in games; the reason it works the opposite way for them is that the Unreal Editor is itself a product that gets shipped.
Sadly, for most of us, the tools used to make the game (including the engine) are for internal use only, and most of the time there is no army of programmers available to do all of the work ahead of time. So it pays to wait and focus on the hot path of the game you are shipping right now, not a hypothetical one you might ship later.
I can build everything in one level at the start, or I can build it across multiple levels and stream them. That decision needs to be made at the start.
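To make the streaming option concrete, here is a hypothetical sketch (plain Python, not Unreal's actual API) of the idea: keep only the chunks of the world near the player loaded, streaming the rest in and out as they move.

```python
# Hypothetical level-streaming sketch: the world is divided into square
# chunks, and we keep a (2*LOAD_RADIUS+1)^2 grid of chunks around the
# player loaded at any time.

CHUNK_SIZE = 100.0   # world units per chunk (made-up value)
LOAD_RADIUS = 1      # 1 -> keep a 3x3 grid around the player

loaded = {}  # (cx, cy) -> chunk data

def chunk_of(x, y):
    """Map a world position to its chunk coordinates."""
    return (int(x // CHUNK_SIZE), int(y // CHUNK_SIZE))

def load_chunk(key):
    return f"chunk {key}"  # stand-in for loading real level data

def update_streaming(player_x, player_y):
    cx, cy = chunk_of(player_x, player_y)
    wanted = {(cx + dx, cy + dy)
              for dx in range(-LOAD_RADIUS, LOAD_RADIUS + 1)
              for dy in range(-LOAD_RADIUS, LOAD_RADIUS + 1)}
    for key in wanted - loaded.keys():
        loaded[key] = load_chunk(key)   # stream in what just came into range
    for key in set(loaded) - wanted:
        del loaded[key]                 # stream out what fell out of range
```

The point of the comment stands either way: retrofitting this onto a game built as one monolithic level means reworking how every system references the world, which is why it's a day-one decision.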
And of course, if I'm targeting a 4090 and hoping to just slap DLSS on top, it's not going to work. I could pull a TI, turn ambient occlusion (AO) off, and then pretend UE5 is the problem, but it's really just a developer issue.
Games don't have unit tests, so you can't quickly make code changes without breaking the world.
Who knows if the engine is any different.
You can absolutely write unit and even integration tests in games, but I agree it isn't really done much, because of the domain. Bugs tend to get caught in QA instead, or, these days, in Early Access.
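The pure-logic parts of a game test exactly like any other code. A minimal sketch, using Python's `unittest` and a made-up damage rule (the rule and names are my assumptions, not anything from the thread):

```python
import unittest

def apply_damage(health, damage, armor):
    """Hypothetical game rule: armor absorbs a flat amount of damage,
    health never drops below zero, and armor can't heal you."""
    effective = max(0, damage - armor)
    return max(0, health - effective)

class DamageTests(unittest.TestCase):
    def test_armor_absorbs_flat_amount(self):
        self.assertEqual(apply_damage(100, 30, 10), 80)

    def test_health_never_goes_negative(self):
        self.assertEqual(apply_damage(5, 50, 0), 0)

    def test_armor_cannot_heal(self):
        self.assertEqual(apply_damage(100, 5, 10), 100)
```

Run with `python -m unittest <file>`. The hard-to-test parts are rendering, physics timing, and anything frame-dependent, which is why so much still falls to QA playtesting.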