We present ASD (Action, Sequence, and Divide), a new framework for
Hierarchical Reinforcement Learning (HRL). Existing HRL methods construct
task hierarchies but do not prevent unnecessary exploration when tasks
must be performed in a particular sequence, so the agent needlessly
explores all permutations of the tasks. When the task hierarchies are
expressed in the ASD framework, the RL agent is better constrained: it is
prevented from pursuing invalid policies and therefore reaches the
optimal policy faster. The hierarchies created
using the methods explained in this paper can be used to solve new
episodes of the same environment, as well as similar instances of the
problem. The hierarchies generated with an ASD framework can be used to
establish an ordering of tasks. The objective is not only to complete
the tasks but also to give the agent insight into the sequence in which
the tasks must be performed to solve a problem correctly. We
present an algorithm to generate the hierarchies as an ASD framework.
The algorithm has been evaluated on standard RL domains, namely Taxi and
Wargus, and is found to produce correct hierarchies.
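As an illustrative sketch (not the paper's implementation), the sequencing constraint can be thought of as a partial order over subtasks: the agent may only attempt a subtask whose prerequisites have already completed, which prunes invalid task permutations from exploration. All names below (`precedes`, `admissible`, `valid_orderings`) and the Get/Put ordering from a Taxi-like domain are hypothetical examples.

```python
# Illustrative sketch of a task-sequencing constraint; names and the
# Taxi-like Get/Put ordering are hypothetical, not taken from the paper.
from itertools import permutations

# Partial order: in a Taxi-like domain, Get (pickup) must precede Put (dropoff).
precedes = {"Get": set(), "Put": {"Get"}}

def admissible(completed):
    """Subtasks whose prerequisites are all completed and that are not yet done."""
    return {t for t, pre in precedes.items()
            if pre <= completed and t not in completed}

def valid_orderings(tasks):
    """All permutations of `tasks` consistent with the partial order."""
    valid = []
    for perm in permutations(tasks):
        done = set()
        ok = True
        for t in perm:
            if not precedes[t] <= done:  # a prerequisite is missing
                ok = False
                break
            done.add(t)
        if ok:
            valid.append(perm)
    return valid
```

With two ordered subtasks, only one of the two permutations survives, so an agent restricted to `admissible` subtasks never wastes exploration on the invalid ordering.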