Char
A char is a data type used to represent a single character or character code unit.
It appears in many programming languages, but the exact size, encoding rules, and behavior depend on the language and platform.
What it does
A char stores one character-like value.
It is commonly used to:
- Represent a single letter, digit, or symbol
- Work with low-level string processing
- Parse text one character at a time
- Store character code values in some languages
- Support text-oriented logic in systems and applications
Core concepts
Single character value
A char is usually meant to hold one character rather than a full string.
That is the main difference between character types and string types.
Language-specific behavior
char is not perfectly consistent across languages.
Some languages define it as a primitive type, some handle Unicode differently, and some do not expose char as a core standalone type at all.
Character vs number
In some languages, a char can also map directly to a numeric code value under the hood.
That is why character and integer behavior sometimes overlap in lower-level programming.
Common use cases
- Tokenizing text
- Parsing source or input one character at a time
- Working with delimiters and separators
- Handling low-level text or encoding logic
- Representing a single symbol in code
Practical notes
- char is a language concept, not a universal data representation standard.
- A character and its encoded byte representation are related but not identical ideas.
- Many modern application workflows operate more often on strings than on individual char values.
- char is especially relevant in lower-level languages and parsing logic.
Frequently Asked Questions
Is char the same as string?
No. A char represents one character-like value, while a string represents a sequence of characters.
Is char always one byte?
No. That depends on the language and encoding model.
Do all languages have a char type?
No. Some do, some expose character handling differently, and some emphasize strings instead.