This term is vague: most of us have some intuitive understanding of it, but there's no consensus on what it actually means.
The following is my own definition, based on my experience and biases, but I find it useful for communication, particularly with managers¹.
Software architecture (or lack thereof) is what determines the cost, in both time and money², of adding a new feature to an application.
With ideal architecture, adding feature X to a codebase takes the same amount of time/money/effort regardless of how big or small the codebase is, how many features it already has, or how many developers are working or have worked on it.
The opposite would be a codebase so badly written that the time/money needed to add each new feature grows exponentially with the size of the codebase.
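To make the two extremes a bit more concrete, here is one informal way to write them down (my own sketch, not a rigorous model), where n is the number of features already in the codebase and C(n) is the cost of adding the next one:

$$
C_{\text{ideal}}(n) = O(1)
\qquad \text{vs.} \qquad
C_{\text{worst}}(n) = O(2^{n})
$$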
Those two ends of the spectrum exist only in fiction. Real projects sit somewhere in between, closer to one end or the other, and usually wind up needing a complete rewrite after 2 to 7 years.
1. I've been in interviews with managers, principal engineers, and CTOs in which I probed into the company's architecture practices and policies, and I usually got some variation of "We care about keeping developers happy, so we dedicate some time to paying down technical debt from time to time". That was a strong red flag, but unfortunately it's a very common situation.
2. Time is money in two ways. The first is determined by salaries, bills, etc., and is easy to quantify. The second is time to market: spending more time developing an application not only means you'll spend more money on it; it also means you might miss opportunities. That opportunity cost is arguably impossible to calculate, or even to estimate or guesstimate, so it's less confusing to speak of it as pure time rather than as part of the money.