Graphical User Interface

What is a Graphical User Interface?

"Graphical user interface" (abbreviated GUI) is a term that refers to the windows, buttons, menus, icons, indicators, text labels, and other virtual objects displayed on a computer's screen that can be interacted with using the mouse, the keyboard, or other hardware such as a touch screen. These objects are what a human uses to control a specific program: an application, an operating system, or a website.

A screenshot of Krita's "Levels" filter settings as shown in the new filter mask dialog. This is the GUI to add a new filter mask, with the GUI to configure the Levels filter displayed inside of it. It uses a list-detail layout.

The term contrasts with command-line interfaces (CLI), which let a human control a program by typing commands into a text-based command line (i.e. a terminal). The CLI of an operating system is also called its shell, because it is the outermost layer wrapped around the kernel. Nowadays we also have voice user interfaces (VUI), which are voice-activated but are essentially the same thing as a CLI.
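
To make "typing commands" concrete, here is a minimal sketch of a toy CLI in Python; the commands `greet` and `quit` are invented for this example. The program reads a line of text, interprets it as a command, and prints a textual response.

```python
def main():
    # A read-evaluate-print loop: the core of any CLI.
    while True:
        line = input("> ")                    # the human types a command
        command, _, argument = line.partition(" ")
        if command == "greet":
            print(f"Hello, {argument or 'world'}!")
        elif command == "quit":
            break                             # terminate the program
        else:
            print(f"Unknown command: {command}")

if __name__ == "__main__":
    main()
```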

In any user interface (UI), human interaction is called "input," and information displayed to the human is called "output." For example, when you press the physical mouse button while the pointer is over a button on the screen, that's input for the GUI. When the program draws that on-screen button as pressed in response, that's output for the human user.
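
As a sketch of this input/output cycle, here is a minimal GUI in Python using the standard-library tkinter toolkit; the window title and label texts are made up for illustration. Clicking the button is input; updating the label is the program's output.

```python
import tkinter as tk

def on_click():
    # Output: change what the screen displays in response to the input.
    label.config(text="Button was pressed!")

root = tk.Tk()
root.title("Input/output demo")

# Pressing the mouse button over this widget generates an input event;
# the toolkit then calls on_click for us.
button = tk.Button(root, text="Press me", command=on_click)
button.pack()

label = tk.Label(root, text="Waiting for input...")
label.pack()

root.mainloop()  # process input events and redraw until the window closes
```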

Consistency of UIs across applications, or across the parts of one application, is governed by Human Interface Guidelines (HIG). A HIG dictates things such as which terms to use (e.g. "quit" or "exit"), the order of the OK/Cancel buttons, the size of clickable areas, the spacing, etc. Those who do not have a HIG document use common sense, and those who do not have common sense design bad interfaces.

For examples of HIGs, see: [KDE] [XFCE] [Apple].

Another term is UX (user experience). User experience design is about designing around "user experiences." One facet is that a frictionless and empowering experience with a UI matters more than a UI that is more powerful but frustrating to use. One would suspect it's about designing usable, intuitive software, but it could just as well be about adding animations, fading, and sliding to every button, and removing every single useful tool of an information-dense application so that the interface looks "clean."

As someone who writes articles about user interfaces, I can't sympathize with someone who can barely use a computer. If we open the same app, our experiences will be vastly different, because our expectations are vastly different. For me, if an application doesn't have a menubar, I don't want to touch it. Some people say it's better UX without a menubar. I can't fathom this; it makes no sense to me, and I'll never understand it. For me, you can measure how good an app is by how many items its main menu and submenus have. Some think it's the inverse: the app should have only one button and do one thing only.

That's to say that it's not possible to design a good UX for everyone, just as it's not really possible to design a good UI for both a touch screen and a mouse. If I'm on a touch screen, I need large click targets. If I have a precise pointing device, I want small click targets and denser information. Picking one side worsens the experience for the other side.

GUIs are extremely complex compared to CLIs, and it's very easy to write a very bad GUI program. For a GUI to work, it must process user input and update what's displayed on the screen. This requires a loop that runs until the program terminates, which typically happens right after its main window is closed. In such a system, instances of user input are called "input events."

A naive implementation of a GUI's main event loop would use 100% of a CPU core, because it would check for new input events immediately after updating the GUI's appearance, so the CPU would constantly be either polling for events or drawing graphics. A saner approach leverages the operating system's API (application programming interface): the application tells the operating system that it has nothing to do until there is user input, and the operating system simply stops executing the application's program until then.

Because different operating systems have different APIs for this, creating a cross-platform GUI requires knowing the API of each operating system you target. Consequently, most cross-platform apps don't code their own GUI toolkit; they use a ready-made, well-tested toolkit written by programmers who know these details, such as Qt or GTK. On Windows, the basic API for displaying things on the screen is the Win32 API. On Linux, it's possible to run some Windows applications through WINE, which provides an abstraction layer for Windows applications. Essentially, Windows apps think they're interfacing with the Win32 API on Windows, but they're actually interfacing with WINE's implementation of the Win32 API on Linux. Since they get the expected outputs for their inputs, the programs don't know the difference, and they work as expected.
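
To illustrate the difference between the two event loops, here is a sketch in Python; the event queue and event names are invented stand-ins for what a real toolkit builds on top of the platform API. The naive loop keeps a CPU core busy by polling, while the blocking loop lets the operating system suspend the program until an event actually arrives.

```python
import queue

# Stand-in for the operating system's event delivery. In a real toolkit,
# this is implemented on top of the platform API (e.g. Win32 on Windows).
events = queue.Queue()

def redraw(event):
    print(f"updating the screen after {event!r}")

def naive_event_loop():
    # Bad: checks for input again immediately after every check, so this
    # loop keeps one CPU core 100% busy even when nothing is happening.
    while True:
        try:
            event = events.get_nowait()   # poll without waiting
        except queue.Empty:
            continue                      # nothing yet; check again
        if event == "window-closed":
            break
        redraw(event)

def blocking_event_loop():
    # Good: get() blocks, so the OS stops executing this program until
    # an event arrives, leaving the CPU free for other work.
    while True:
        event = events.get()              # sleep until there is input
        if event == "window-closed":
            break
        redraw(event)

if __name__ == "__main__":
    for e in ["button-pressed", "key-typed", "window-closed"]:
        events.put(e)
    blocking_event_loop()
```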
