What is Computer Science?

Introduction of Computer Science

  • The study of computers and computing, including their theoretical and algorithmic foundations, hardware and software, and their uses for processing information.
  • Computer science draws a number of its foundations from mathematics and engineering and thus incorporates techniques from areas such as queueing theory, probability and statistics, and electronic circuit design.
  • The discipline of computing includes the study of algorithms and data structures, the modeling of data and knowledge processes, computer and network design, and artificial intelligence (AI).
  • Computing also makes heavy use of hypothesis testing and experimentation during the conceptualization, design, measurement, and refinement of new algorithms, information structures, and computer architectures.

Computer science is considered part of a family of five separate yet interrelated disciplines:
Computer Science
Software Engineering
Information Systems
Computer Engineering
Information Technology

Computational Thinking

Computational thinking is an approach to solving problems using concepts and ideas from computer science, and expressing solutions to those problems so that they can be run on a computer. As computing becomes more and more prevalent in all aspects of modern society, not just in software development and engineering but in business, the humanities, and even everyday life, understanding how to use computational thinking to solve real-world problems is a key skill in the 21st century.

Computational thinking is built on five pillars:

Decomposition

Pattern Recognition

Data Representation

Abstraction

Algorithms
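As a toy illustration, the five pillars can all show up in one small problem. The sketch below (the problem and function name are my own choices, not from the original article) finds the most frequent word in a piece of text, with comments marking where each pillar applies:

```python
def most_frequent_word(text):
    # Decomposition: break the problem into small steps (split, count, select).
    words = text.lower().split()   # Data representation: text becomes a list of words
    counts = {}                    # Abstraction: a dict hides the counting details
    for word in words:             # Pattern recognition: the same counting step repeats
        counts[word] = counts.get(word, 0) + 1
    # Algorithm: a precise rule for picking the answer (the word with the highest count).
    return max(counts, key=counts.get)

print(most_frequent_word("the quick fox jumps over the lazy dog"))  # the
```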

Understanding the differences between binary and ASCII formats

  • ASCII Data Format:
    ASCII stands for “American Standard Code for Information Interchange.” The ASCII protocol consists of data encoded as ASCII values with minimal control codes added; those control codes are interpreted by the printer. Parallel, serial, and Ethernet media all support ASCII communication and treat it as the standard.
  • Binary Format:
    Binary encoding, more commonly known as BCP (Binary Communications Protocol), is made up of values in the range 0–255. Binary encoding emphasizes compactness over ease of generation or interpretation. Most elements of the language, such as integers, real numbers, and operator names, are represented by fewer bytes in binary encoding than in ASCII encoding. Binary encoding is generally used in environments in which communication bandwidth or storage space is limited. Communication media such as AppleTalk and EtherTalk support the binary protocol.
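The compactness difference is easy to see with a small sketch: the same integer takes five bytes when written out as ASCII digits, but only two bytes as raw binary.

```python
value = 65535

# ASCII form: one byte per decimal digit.
ascii_form = str(value).encode("ascii")   # b'65535'

# Binary form: the number packed directly into bytes.
binary_form = value.to_bytes(2, "big")    # b'\xff\xff'

print(len(ascii_form))   # 5 bytes as ASCII text
print(len(binary_form))  # 2 bytes in binary
```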

What are Algorithms and their Complexity?

An algorithm is a specific procedure for solving a well-defined computational problem.

Algorithm development is more than just programming. It requires an understanding of the alternatives available for solving a computational problem, including the hardware, networking, programming language, and performance constraints that accompany any particular solution. It also requires understanding what it means for an algorithm to be “correct” in the sense that it fully and efficiently solves the problem at hand.
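A classic example of a specific procedure for a well-defined problem is Euclid's algorithm for the greatest common divisor, sketched here in Python:

```python
def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    # until the remainder is zero; the last nonzero value is the GCD.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

Its correctness rests on the fact that gcd(a, b) = gcd(b, a mod b), and it terminates because the remainder strictly decreases on every iteration.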

The time complexity of an algorithm signifies the total time required by the program to run to completion.

The time complexity of algorithms is most commonly expressed using big O notation, an asymptotic notation that describes how the running time grows with the input size. We will study it in detail in the next tutorial.

Time Complexity is most commonly estimated by counting the number of elementary steps performed by an algorithm to finish execution.
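Counting elementary steps can be done directly in code. The sketch below (the step-counting helpers are my own, for illustration) counts comparisons made by a linear search, which is O(n), versus a binary search on sorted input, which is O(log n):

```python
def linear_search_steps(items, target):
    # Linear scan: check each element in turn. O(n) comparisons.
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            break
    return steps

def binary_search_steps(items, target):
    # Binary search on a sorted list: halve the range each time. O(log n).
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1000))
print(linear_search_steps(data, 999))  # 1000 steps
print(binary_search_steps(data, 999))  # 10 steps
```

For 1000 sorted elements, the worst case falls from about a thousand steps to about ten, which is exactly the gap big O notation captures.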

What is pseudocode?

In computer science, pseudocode is a plain-language description of the steps in an algorithm or another system. Pseudocode often uses the structural conventions of a normal programming language but is intended for human reading rather than machine reading.
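As a small example, here is pseudocode for finding the largest item in a list, followed by a direct Python translation of the same steps:

```python
# Pseudocode (for human reading):
#
#   set largest to the first item in the list
#   for each remaining item in the list:
#       if the item is greater than largest:
#           set largest to the item
#   return largest

# The same steps translated into Python:
def find_largest(items):
    largest = items[0]
    for item in items[1:]:
        if item > largest:
            largest = item
    return largest

print(find_largest([3, 7, 2, 9, 4]))  # 9
```

Notice that the pseudocode carries the logic without committing to any language's syntax; the translation is almost mechanical.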

Storing Data in Memory

Computer memory is technically any type of electronic storage. Without it and without access to it, a computer is just a useless box. At the core of the computer is the central processing unit, or CPU, the source of control that runs all programs and instructions. In order to function, computers use two types of memory: primary and secondary. Primary memory serves as the main storage, while data and programs are kept long-term in secondary memory. Memory is not part of the CPU itself; rather, the CPU depends on memory and storage to do any useful work.

While the terms ‘primary memory’, ‘main storage’, ‘primary storage’, ‘internal storage’, ‘main memory’, and ‘RAM’ are used interchangeably, the most common term is random access memory, or RAM. RAM holds the data and instructions for processing computer operations, and it is used only for as long as the program that needs it is running. Some of the reasons a computer needs memory only for processing include:

  • The computer needs to be powered on for most programs to run; once the power is off, the memory storage is wiped out.
  • No single program can use all the memory. Programs running simultaneously need to share memory, meaning it is split among those programs.
  • The memory may not be big enough to hold all the processing data, so it is released as soon as a program is done with it.
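A minimal sketch of this idea in Python: an object occupies RAM only while the program holds a reference to it, and dropping the reference lets the runtime reclaim that memory (the variable names here are illustrative).

```python
import sys

# Build a large list; it occupies RAM while the program references it.
data = list(range(1_000_000))
print(sys.getsizeof(data))  # size of the list object itself, in bytes

# Drop the reference; the runtime may now reclaim that memory.
del data
```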

Datatypes to Store Data

In computer science and computer programming, a data type or simply type is an attribute of data that tells the compiler or interpreter how the programmer intends to use the data. Most programming languages support basic data types of integer numbers (of varying sizes), floating-point numbers (which approximate real numbers), characters, and Boolean. A data type constrains the values that an expression, such as a variable or a function, might take. This data type defines the operations that can be done on the data, the meaning of the data, and the way values of that type can be stored. A data type provides a set of values from which an expression (i.e. variable, function, etc.) may take its values.
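These ideas can be seen in a short sketch: each value carries a type, and the type determines which operations are valid on it.

```python
count = 42      # integer
ratio = 3.14    # floating-point number (approximates a real number)
letter = "A"    # character (in Python, a one-character string)
flag = True     # Boolean

print(type(count).__name__)  # int
print(count + 1)             # arithmetic is defined for integers
print(letter.upper())        # text operations are defined for strings
# count.upper() would raise an AttributeError:
# the int type does not define string operations.
```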
