No, I’m not saying I’m against this methodology. I’m on board with it, I can see a lot of sense in it, it’s fine. But the more I look at particular implementations, the more the following analogy comes to mind.
Imagine you want to build a skyscraper. So you just bring all the construction workers, truck drivers, welders, plumbers, architects, etc., etc. to the site, and then you shout through your loudspeaker: “Three! Two! One! Go!!!”. Will something get built eventually? Probably, yes. Will that “something” have a trace of architecture in it? Probably not.
So why do people expect anything different to happen in software development? I mean, of course, a dozen of the world’s best coders, put into one open-plan office and paid a thousand quid per hour, will eventually develop something. That “something” will probably even work. But it will never have a single root idea behind it: the code will not be structured, the same pieces of functionality will be implemented in a variety of ways, etc., etc. - the thing well known as a “big ball of mud”. And that holds even if the dozen consists of software architects only (actually, that would maximize the chances of mud).
Obviously, software architecture itself should be agile. That means it should allow the system to scale and the functionality to be extended, accommodating as many future needs as possible, so that you can actually do agile development on top of it. But at least some fundamental principles, some basic DOs and DON’Ts, the most useful design patterns - these should be defined and documented in advance! That is, before the sprints start.
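To make that concrete, here is a minimal, purely illustrative sketch of what “defined in advance” might mean. Everything in it is hypothetical - the `PaymentProcessor` name, its methods, the whole domain are invented for this example - but it shows the kind of extension point an architecture document could fix before sprint one: a single agreed-upon contract, so every team extends the system the same way instead of reinventing it.

```python
from abc import ABC, abstractmethod

class PaymentProcessor(ABC):
    """Extension point agreed on before sprint one: every new payment
    method implements this interface instead of inventing its own shape."""

    @abstractmethod
    def charge(self, amount_cents: int, currency: str) -> str:
        """Charge the given amount; return a provider transaction id."""

class CardProcessor(PaymentProcessor):
    def charge(self, amount_cents: int, currency: str) -> str:
        # Sprint-team implementation detail, hidden behind the agreed contract.
        return f"card-txn-{amount_cents}-{currency}"

class InvoiceProcessor(PaymentProcessor):
    def charge(self, amount_cents: int, currency: str) -> str:
        return f"invoice-txn-{amount_cents}-{currency}"

def checkout(processor: PaymentProcessor, amount_cents: int) -> str:
    # Callers depend only on the documented contract, so adding a new
    # payment method never forces changes here.
    return processor.charge(amount_cents, "EUR")

if __name__ == "__main__":
    print(checkout(CardProcessor(), 4999))
    print(checkout(InvoiceProcessor(), 4999))
```

The particular pattern doesn’t matter; what matters is that the contract is written down and predates the first sprint, so the “variety of ways” problem never gets a chance to start.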
You can’t knock a software architecture together on the fly. You have to invest in it! The thought seems so obvious, so why do we see the opposite so often?