I come from an MVC background (Flex and Rails) and love the ideas of code separation, reusability, encapsulation, etc. It makes it easy to build things quickly and reuse components in other projects. However, it has been very difficult to stick with the MVC principles when trying to build complex, state-driven, asynchronous, animated applications.
I am trying to create animated transitions between many nested views in an application, and it got me thinking about whether I was misleading myself… Can you combine principles from MVC with principles from Artificial Intelligence (Behavior Trees, Hierarchical State Machines, Nested States), the way games do? Do those two disciplines play nicely together?
It’s very easy to keep the views/graphics ignorant of anything outside themselves when things are static, like with an HTML CMS or whatever. But once you start adding complex state-driven transitions, it seems like everything needs to know about everything else, and MVC almost gets in the way. What do you think?
Update:
An example: right now I am working on a website in Flex. I have come to the conclusion that in order to properly animate every nested element in the application, I have to think of them as AI agents. Each “View”, then, has its own Behavior Tree. That is, it performs an action (shows and hides itself) based on the context (what the selected data is, etc.). To do that, I need a ViewController-type thing, which I’m calling a Presenter. So I have a View (the graphics laid out in MXML), a Presenter (defining the animations and actions the View can take based on the state and nested states of the application), and a Presentation Model to present the data to the View (through the Presenter). I also have Models for value objects and Controllers for handling URLs, database calls, and so on: all the normal static/HTML-like MVC stuff.
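Roughly, the structure I have in mind looks something like this (a TypeScript-ish sketch rather than my actual MXML/ActionScript; all the names are made up for illustration):

```typescript
// Presentation Model: exposes only the state the view needs.
interface GalleryPresentationModel {
  selectedItemId: string | null;
  items: string[];
}

// View: only knows how to draw and animate itself.
interface GalleryView {
  render(model: GalleryPresentationModel): void;
  animateIn(): void;
  animateOut(): void;
}

// Presenter: decides which animation/action the view takes based on
// application state; the view just performs what it is told.
class GalleryPresenter {
  constructor(
    private view: GalleryView,
    private model: GalleryPresentationModel
  ) {}

  onSelectionChanged(newSelectedId: string | null): void {
    this.model.selectedItemId = newSelectedId;
    if (newSelectedId === null) {
      this.view.animateOut();
    } else {
      this.view.render(this.model);
      this.view.animateIn();
    }
  }
}
```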
For a while there I was trying to figure out how to structure these “agents” so that they could respond to their surrounding context (what’s selected, etc.). It seemed like everything needed to be aware of everything else. Then I read about a Path/Navigation Table/List in games and realized they keep a centrally stored table of precalculated actions every agent can take. That got me wondering how they actually structure their code.
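If I translate that table idea to my views, I imagine something like the sketch below (purely illustrative; real navigation tables store paths between nodes, but the lookup-instead-of-mutual-awareness idea is what I mean):

```typescript
type ViewState = "home" | "gallery" | "detail";
type AppEvent = "itemSelected" | "backPressed";
type Action = "show" | "hide" | "none";

// A central, precomputed map from (current state, event) to the action
// each view should take. Each view consults the table for itself; no
// view needs to know about the others.
const actionTable: Record<ViewState, Record<AppEvent, Record<string, Action>>> = {
  home: {
    itemSelected: { homeView: "hide", galleryView: "show", detailView: "none" },
    backPressed:  { homeView: "none", galleryView: "none", detailView: "none" },
  },
  gallery: {
    itemSelected: { homeView: "none", galleryView: "hide", detailView: "show" },
    backPressed:  { homeView: "show", galleryView: "hide", detailView: "none" },
  },
  detail: {
    itemSelected: { homeView: "none", galleryView: "none", detailView: "none" },
    backPressed:  { homeView: "none", galleryView: "show", detailView: "hide" },
  },
};

function lookupAction(state: ViewState, event: AppEvent, viewId: string): Action {
  return actionTable[state][event][viewId] ?? "none";
}
```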
Most 3D video game internals are kept secret, and from what I can see a lot of it is done with graphical UIs/editors, for example for defining behavior trees. So I’m wondering if they use some sort of MVC to structure how their agents respond to the environment, and how they keep their code modular and encapsulated.
3 Answers
Keep this in mind:
The things which need to react simply have to be aware of the things to which they need to react.
So if they need to know about everything, then they need to know about everything.
Otherwise, how do you make them aware? In 3D games, say first-person shooters, the enemies react to sound and sight (footsteps/gunshots and the player/dead bodies, for instance). Note that I named an abstract basis (the senses) and only parts of the decision tree (the stimuli).
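As a made-up sketch of that abstract basis (TypeScript just for illustration), the agent reacts to percept kinds, not to the concrete objects that produced them:

```typescript
type Percept =
  | { kind: "sound"; source: "footsteps" | "gunshot"; position: [number, number] }
  | { kind: "sight"; what: "player" | "deadBody"; position: [number, number] };

interface EnemyAgent {
  perceive(p: Percept): void;
}

class Guard implements EnemyAgent {
  perceive(p: Percept): void {
    // A tiny slice of the decision tree: only percept kinds matter here.
    if (p.kind === "sound" && p.source === "gunshot") {
      this.investigate(p.position);
    } else if (p.kind === "sight" && p.what === "player") {
      this.attack(p.position);
    }
  }
  private investigate(pos: [number, number]): void { console.log("investigate", pos); }
  private attack(pos: [number, number]): void { console.log("attack", pos); }
}
```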
In your specific case it might be wrong to split the whole thing across several agents, and simpler to leave it to one main agent that delegates orders to separate processes: each view could be a process that the main agent tells to switch to one of a number of views, depending on what data the main agent has received.
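In code, that delegation could look roughly like this (hypothetical names, TypeScript only for illustration):

```typescript
interface SwitchableView {
  switchTo(viewName: string): void;
}

class MainAgent {
  constructor(private views: SwitchableView[]) {}

  // The only place that interprets the incoming application data;
  // the views never inspect the data themselves, they just take orders.
  onDataReceived(data: { section: string }): void {
    for (const view of this.views) {
      view.switchTo(data.section);
    }
  }
}
```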
Hope that helps. Take it all with a grain of salt 🙂
Of course. 99.9% of the AI is purely in the Model. The Controller sends the inputs to it, the View is how you represent it on the screen to the user.
Now, if you want to start having the AI control something, you may end up nesting the concepts: your game ‘model’ contains a Model for an entity, a Controller for that entity (the AI sending commands to it), and a View for the entity representing the perceptions that the Controller can work with. But that’s a separate issue from whether they can ‘play nicely’. MVC is about separating presentation and input from logic and state, and that aspect doesn’t care what the logic and state look like.
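As a rough sketch of that nesting (illustrative names only, not from any engine): inside the game’s Model lives a second triad per entity, where the “view” is the entity’s perception of the world and the “controller” is the AI issuing commands.

```typescript
interface EntityModel {          // the entity's own state
  position: [number, number];
  health: number;
}

interface EntityPerception {     // the entity's "view" of the world
  nearestEnemyPosition(): [number, number] | null;
}

class EntityAIController {       // the AI acting as the entity's "controller"
  constructor(
    private model: EntityModel,
    private perception: EntityPerception
  ) {}

  update(): void {
    const target = this.perception.nearestEnemyPosition();
    if (target) {
      // Issue a command based only on what the entity can perceive.
      this.moveToward(target);
    }
  }

  private moveToward(target: [number, number]): void {
    this.model.position = [
      this.model.position[0] + Math.sign(target[0] - this.model.position[0]),
      this.model.position[1] + Math.sign(target[1] - this.model.position[1]),
    ];
  }
}
```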
It sounds like you need to make more use of the Observer / Event Aggregator pattern. If multiple components need to react to arbitrary application events without introducing undue coupling, an event aggregator helps: when an item is selected, an application event is published, the relevant controllers tell their views to run animations, and so on. The components aren’t aware of each other; they just listen for common events.
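A minimal sketch of that, assuming a hand-rolled aggregator and made-up controller names (TypeScript for illustration):

```typescript
type Handler<T> = (payload: T) => void;

class EventAggregator {
  private handlers = new Map<string, Handler<unknown>[]>();

  subscribe<T>(event: string, handler: Handler<T>): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler as Handler<unknown>);
    this.handlers.set(event, list);
  }

  publish<T>(event: string, payload: T): void {
    for (const handler of this.handlers.get(event) ?? []) {
      handler(payload);
    }
  }
}

// Hypothetical controllers that react to the shared event.
const detailController = { showItem: (id: string) => console.log("show detail for", id) };
const galleryController = { collapse: () => console.log("collapse gallery") };

const bus = new EventAggregator();
bus.subscribe<{ id: string }>("itemSelected", ({ id }) => detailController.showItem(id));
bus.subscribe<{ id: string }>("itemSelected", () => galleryController.collapse());

// Somewhere else in the app, on selection:
bus.publish("itemSelected", { id: "photo-42" });
```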
Also, the code that makes the view do things (launching animations depending on model/controller state) is part of the View itself, so you don’t have to make your architecture weird by having both a Controller and a ViewController. If it’s UI-specific code, then it’s part of the view. I’m not familiar with Flex, but in WPF/Silverlight, stuff like that would go into the code-behind (though for the most part the Visual State Manager is more than enough to deal with state animations, so you can keep everything in XAML).