I last left the project with the player being able to pick up and drop tiles on the board, but with no implementation of the game logic that I’d spent all that time building up.
In this post I’m going to add the game logic to the drag and drop action.
My goals are:
- To show a highlighted “drop zone” on the board as the player moves a tile around, indicating where the tile can be dropped
- To “snap” the tile into the drop zone if the drag ends while the tile is in a permissible position
- Otherwise, to snap the tile back to its position at the edge of the board
- To sort out the messy pickup of tiles on a busy board
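The snap-or-return behaviour in the goals above comes down to some plain geometry. Here is a minimal sketch of the idea; the names, the grid size, and the flat coordinate representation are my own simplifications, not the project’s actual code:

```swift
// A board square identified by row and column.
struct Square: Equatable {
    var row: Int
    var column: Int
}

let gridSize: Double = 50.0  // assumed length of one board square, in points

// Convert a dragged tile's origin to the nearest board square.
func nearestSquare(toX x: Double, y: Double) -> Square {
    Square(row: Int((y / gridSize).rounded()),
           column: Int((x / gridSize).rounded()))
}

// Snap into the drop zone if the placement is permissible; otherwise
// return the tile's "home" position at the edge of the board.
func snappedOrigin(fromX x: Double, y: Double,
                   homeX: Double, homeY: Double,
                   canPlace: (Square) -> Bool) -> (x: Double, y: Double) {
    let target = nearestSquare(toX: x, y: y)
    guard canPlace(target) else { return (homeX, homeY) }
    return (Double(target.column) * gridSize, Double(target.row) * gridSize)
}
```

The same `nearestSquare` calculation can drive the highlighted drop zone during the drag, so the preview and the final snap always agree.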
After some brief experimentation I decided that live views weren’t good enough for playing around with an interactive board and gestures. Besides, once you’re dealing with gestures and user interaction, you really need to be working on a device: things that feel fine with a mouse or trackpad and pointer can be no good at all on an actual device.
In this post I’m going to talk about adding gesture recognisers and transferring to a full project.
It’s time to move on from the character-based visualisations of the board and the tiles, and create some views.
Each tile will be represented with a view, and the board will be a view. Placed tiles will be added as subviews of the board, which will simplify the drawing and positioning logic.
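A sketch of that hierarchy in UIKit, with hypothetical class names standing in for the real ones: placing a tile reparents its view into the board, converting the frame so the tile doesn’t appear to jump.

```swift
import UIKit

// Hypothetical view classes standing in for the real ones.
class BoardView: UIView {}
class TileView: UIView {}

// Reparent a tile into the board. Once the tile is a subview of the
// board, its frame is expressed in board coordinates, which is what
// simplifies the drawing and positioning logic.
func place(_ tile: TileView, on board: BoardView) {
    if let currentSuperview = tile.superview {
        // convert(_:to:) keeps the tile visually still while reparenting.
        tile.frame = currentSuperview.convert(tile.frame, to: board)
    }
    board.addSubview(tile)
}
```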
So far my `Board` model is smart enough to tell if a `Tile` can be placed in a specific location. Now I need to think about what happens when the player actually places the tile. How does the board update its model? What needs to happen here?
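As a rough illustration of the kind of check described above (these types and names are simplified guesses, not the post’s actual code): a placement is legal when every square of the tile lands on the board and none of those squares is already occupied.

```swift
// A tile described by the squares it occupies, as (row, column)
// offsets from the tile's origin square.
struct Tile {
    var squares: [(row: Int, column: Int)]
}

struct Board {
    var rows: Int
    var columns: Int
    var occupied: Set<Int>  // flattened as row * columns + column

    // True if every square of the tile is on the board and empty.
    func canPlace(_ tile: Tile, atRow row: Int, column: Int) -> Bool {
        tile.squares.allSatisfy { offset in
            let r = row + offset.row
            let c = column + offset.column
            guard (0..<rows).contains(r), (0..<columns).contains(c) else {
                return false
            }
            return !occupied.contains(r * columns + c)
        }
    }
}
```

Actually placing the tile would then be a matter of inserting each of those flattened indices into `occupied`.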
It’s time to think about some game logic. The first thing a player will do is try to place a `Tile` on the `Board`. How can I tell if the move should be allowed?
To solve this problem I ended up creating a `GeneratorType`, which is the main focus of this part.
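`GeneratorType` was Swift 2’s iterator protocol; it was renamed `IteratorProtocol` in Swift 3. As a rough sketch of the idea in modern Swift, assuming the generator walks the board squares a tile would occupy from a proposed origin (the details here are my guesses, not the post’s actual implementation):

```swift
// Yields the absolute (row, column) squares a tile would cover,
// given its occupied-square offsets and a proposed origin.
struct SquareGenerator: IteratorProtocol {
    let offsets: [(row: Int, column: Int)]
    let originRow: Int
    let originColumn: Int
    var index = 0

    mutating func next() -> (row: Int, column: Int)? {
        guard index < offsets.count else { return nil }
        defer { index += 1 }
        let offset = offsets[index]
        return (originRow + offset.row, originColumn + offset.column)
    }
}
```

Wrapping this in a `Sequence` would let placement checks use a plain `for` loop over the squares, rather than index arithmetic scattered through the `Board` code.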
In part one I went through the process of building the `Tile` model objects for my Pentominoes puzzle app. In this part I will talk about making the `Board`, and what I learned about protocols and default implementations in the process.
My daughter and I are members of At-Bristol, a most excellent interactive science centre in Bristol. On a recent visit she was captivated by a “Pentominoes” puzzle. Pentominoes are the twelve possible tile shapes you can make using five squares, joined at their edges. They are a little like Tetris shapes, but with five squares instead of four.
I wrote an article for the MartianCraft blog about the Apple Watch, and what makes a good or bad watch experience.
Over the last few releases of iOS, things got complicated. First, we were able to share storyboards between iPad and iPhone projects, thanks to Auto Layout and size classes. Next, it turned out that iPad apps could be shrunk down to iPhone size, stretched out and shrunk back again during multitasking. Apps had to adapt themselves to different sizes at runtime, making sure that they displayed relevant content, appropriate to the current size.
Apple’s solution to this is `UISplitViewController`. On the iPad, this maintains a two-column interface, with a smaller “primary” or “master” view controller on the leading side, and a larger “secondary” or “detail” view controller on the trailing side. On the iPhone, only one view controller is visible. Before multitasking, developers could get away with copy-pasting a delegate method from the template code, maybe checking `UIUserInterfaceIdiom` in a few places, and the split view would work nicely on both devices without anyone having to think too much. Since multitasking, more thinking is required.
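The delegate method in question is most likely `splitViewController(_:collapseSecondary:onto:)`. A minimal sketch in modern Swift, using class names in the style of Apple’s old Master-Detail template (your own types will differ):

```swift
import UIKit

// Stand-in for the template's detail screen.
class DetailViewController: UIViewController {
    var detailItem: AnyObject?  // set when the user selects something
}

class MasterViewController: UITableViewController, UISplitViewControllerDelegate {
    // Returning true tells the split view controller to discard the
    // secondary and show the primary when collapsing to one column,
    // which is what you want when no detail item has been selected yet.
    func splitViewController(_ splitViewController: UISplitViewController,
                             collapseSecondary secondaryViewController: UIViewController,
                             onto primaryViewController: UIViewController) -> Bool {
        guard let nav = secondaryViewController as? UINavigationController,
              let detail = nav.topViewController as? DetailViewController
        else { return false }
        return detail.detailItem == nil
    }
}
```

Multitasking is what makes this method interesting: the split view can collapse and expand while the app is running, not just once at launch, so the decision has to hold up every time the size class changes.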
I recently completed a project involving a WatchKit app. It was not a pleasant experience, so here’s a screed of vague complaints with some half-baked possible solutions, and a possible ray of sunshine at the end.