The inevitable, in the mid-term, will be voice recognition replacing the keyboard, while touch remains for commands too complicated to easily utter, such as zooming in and rotating on a particular part of a photo. In time, even that will become a voice option as voice command language evolves.
Beyond voice is integrated cerebral electrography, where a device, or at least its interface, becomes part of the human anatomy, either worn or surgically integrated, and reads thought commands through the appropriate sensing technology.
Beyond that, who knows, but I would offer this: a receiver/receptor could be integrated with our bodies, probably a simple worn or skin-integrated device. It would not only allow us to upload our desired commands to a central HAL-like device, but would also present us with visual augmentation, letting us see the information we request with our own eyes, appearing before us as if we were inside the display panel. It might even go so far as to allow a dual mode. In open-eyed vision, we would get cyber-augmentation in the form of business information displayed right on a building as we see it. In closed-eyed vision, we could view things like spreadsheets or presentations, or even games and movies, either commanded verbally through the HAL unit or displayed locally on a physical screen or on the inside of our eyelids: an eyelid-display, if you will.
Imagine the gaming experience with THAT!
And beyond THAT, an interface that allows us to tap into the unused processing power within our own brains.