<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>cengelsen.no (English)</title>
        <link>https://test.cengelsen.no/en/blog</link>
        <description>Blog posts from cengelsen.no — in English</description>
        <lastBuildDate>Mon, 26 Feb 2024 00:00:00 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <atom:link href="https://test.cengelsen.no/en/rss.xml" rel="self" type="application/rss+xml"/>
        <item>
            <title><![CDATA[We Should All Subscribe to Calendars]]></title>
            <link>https://test.cengelsen.no/en/blog/we-should-all-subscribe-to-calendars</link>
            <guid isPermaLink="false">https://test.cengelsen.no/en/blog/we-should-all-subscribe-to-calendars</guid>
            <pubDate>Mon, 26 Feb 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[We should normalize subscribing to calendar feeds so we never miss occurrences again. Plus, it's less cumbersome.]]></description>
            <content:encoded><![CDATA[<p>Perhaps not the most controversial or groundbreaking opinion, but we should normalize subscribing to calendar feeds. Lately, I&#39;ve become a bit fascinated with being able to import occurrences from others&#39; calendars into my own. This stemmed from the desire to find various occurrences I could attend outside of my usual routine.</p>
<p>So, I went through all the organizers in my city I could find to see what occurrences they offered over the next six months. This surprisingly took quite a while and could definitely be made easier.</p>
<h3 id="definitions">Definitions</h3>
<p>This section might get a bit long and pedantic, so <a href="#how-is-the-situation-now">click here if you want to skip ahead</a>.</p>
<ul>
<li><p><strong>Organizer</strong></p>
<p> Any agent that organizes occurrences. In this text, it is not the same as a website that aggregates occurrences from multiple organizers into the same platform.</p>
</li>
<li><p><strong>Calendar Feed</strong></p>
<p> A continuous stream consisting of occurrences. The occurrences are objects in the <code>.ics</code> format.</p>
</li>
<li><p><strong>Calendar Subscriber</strong></p>
<p> Any agent that listens to a calendar feed.</p>
</li>
<li><p><strong>Occurrence</strong></p>
<p> A collective term for all objects in the calendar feed. This includes everything from events to parties, to volunteering opportunities, appointments, and meetings. Here, all types of occurrences are categorized according to my own logic:</p>
<ul>
<li><p>Event</p>
<p>An event is a type of occurrence where participants consume content provided by an event host or organizer. Participation is essentially voluntary, but there may be cases where other agents or participants require your participation.</p>
<p>If an event has a program schedule with multiple separate occurrences, each occurrence in the program schedule must be categorized individually. If the program schedule exclusively contains occurrences of another type, the entire event can be defined as that type. For example, an event may have a program schedule where all occurrences in the program schedule are an active party (see below). In such a case, the entire event is an active party.</p>
</li>
<li><p>Volunteering Opportunity</p>
<p>A volunteering opportunity is a type of occurrence where participants produce something or contribute labor. The point of the occurrence is to get work done as part of meeting a certain predetermined goal. This work is done to contribute to a community. Participation is essentially voluntary, but participants cannot cease work until the occurrence is completed.</p>
</li>
<li><p>Party</p>
<p>A party is a type of occurrence where participants both produce and consume. By this, I mean that every participant both takes from the occurrence and contributes to the occurrence. The purpose of the occurrence is social interaction, and participation is entirely voluntary. This type can be further divided into two subcategories.</p>
<ul>
<li><p>Active Party</p>
<p>A party that centers around an activity. In such an occurrence, there is an expectation, and sometimes a soft requirement, for the participant to actively engage in the activity.</p>
</li>
<li><p>Inactive Party</p>
<p>A social occurrence without an activity as the focus. In such an occurrence, there is no expectation or requirement for active participation. The focus is solely on social interaction.</p>
</li>
</ul>
</li>
<li><p>Appointment</p>
<p>A type of occurrence with the nature and purpose of a party. However, it differs in scope: an appointment is smaller and more formal in nature. An appointment always has a predetermined goal, while a party does not. This type can be further divided into two subcategories.</p>
<ul>
<li><p>Serious Appointment</p>
<p>This is a type of meeting where participants meet for productive purposes. The meeting is of a serious nature and often related to work.</p>
</li>
<li><p>Casual Appointment</p>
<p>This is a type of meeting where participants meet for casual purposes. The meeting is of a relaxed nature and often related to social interaction in leisure time.</p>
</li>
</ul>
</li>
<li><p>Deadline</p>
<p>A type of occurrence in the calendar feed that is similar in nature to an appointment. The difference is that one participant has not necessarily agreed to the time, which is almost exclusively determined by the organizer. It is a strict requirement that this deadline be met by the participant in order to participate. The purpose of deadlines is usually to ensure that goals are met on time, or that work is done in a timely manner.</p>
</li>
<li><p>Gym</p>
<p>This is a type of occurrence where one neither produces nor consumes. The purpose of the occurrence is to engage in an activity, either alone or with others, and fulfill one&#39;s bodily duty. Such occurrences serve to improve, or maintain, the participant&#39;s physical capacity. Participation is entirely voluntary.</p>
</li>
</ul>
</li>
</ul>
<h3 id="how-is-the-situation-now">How is the situation now?</h3>
<p>I&#39;m not sure how it is in your city, but Bergen is not particularly organized when it comes to finding the various occurrences happening around the city. There isn&#39;t one central place where you can fetch everything that&#39;s happening. The different types of occurrences are spread out across quite a few different websites.</p>
<p>Most organizers have their own website where they post event details, but not all. Those who don&#39;t have their own website simply post event information on Facebook 🤢 or Instagram 🤮, which is not ideal for everyone.</p>
<p>Organizers use services like <a href="https://www.ticketmaster.no/city/bergen/40500">Ticketmaster for Bergen</a>, <a href="https://ticketco.events/no/nb/m?pattern=bergen">TicketCo for Bergen</a>, <a href="https://www.bergenlive.no/konsertkalender/">Bergenlive</a>, <a href="https://www.studentbergen.no/studentkalender">Studentbergen</a>, and <a href="https://www.meetup.com/find/?source=EVENTS&eventType=inPerson&sortField=DATETIME&location=no--Bergen">Meetup for Bergen</a> to post event information in addition to their own site. <a href="https://kvarteret.no/events">Det Akademiske Kvarter</a> is an example of a site that extracts a list of all events from Studentbergen, where the venue is Det Akademiske Kvarter.</p>
<p>There is some overlap between a couple of the websites, but not entirely. So if you want to find all relevant occurrences, you have to go from website to website and search. Currently, none of the websites mentioned above, except for TicketCo, provide a way to download an <code>.ics</code> file that I can import into my calendar. Most importantly, they don&#39;t provide a way to subscribe to occurrences.</p>
<p><strong>Note</strong>: <em>I&#39;ve included a list of all relevant organizers I found under <a href="#organizers">organizers</a>.</em></p>
<h3 id="what39s-the-problem">What&#39;s the problem?</h3>
<p>The problem mainly lies in the fact that it&#39;s time-consuming and cumbersome to find the occurrences I want to attend. For each event I find, I have to manually create a new event in my own calendar, enter the title, description, select start time, select end time, choose the right calendar label, save, move to the next event, and repeat.</p>
<p>Snork and double snork! Is it really expected of me in the upcoming AI era to do anything manually?</p>
<p>Not only that, but it&#39;s 100% possible for an organizer to change the details of an event after I&#39;ve manually entered it into my calendar. So I have to regularly verify afterwards that nothing has changed in the event.</p>
<p>For example, at <a href="https://bergenbibliotek.no/">Bergens Offentlige Bibliotek</a> (BOB), they were showing a movie called <a href="https://www.imdb.com/title/tt0050976/">The Seventh Seal</a>. So I manually entered it into my calendar a couple of weeks in advance because I wanted the event to be part of a list of occurrences, or calendar feed. This is just because I want to keep my calendar organized. If BOB had changed the time of the movie screening to a different time, it would have been incredibly inconvenient for me to show up at the originally planned time.</p>
<p>I&#39;m not just thinking about myself here, but also everyone else. Considering how incredibly cumbersome this is, it&#39;s no wonder there&#39;s little attendance at local occurrences. Since there&#39;s no centralized platform for this, it&#39;s also not surprising that nobody knows about the occurrences. Nobody finds them. The visibility of occurrences generally follows this philosophy: &quot;You&#39;ll find out about it if someone you know knows about it.&quot; This is quite inefficient. Furthermore, nobody bothers to write them into their calendar because it&#39;s too cumbersome, and usually because they can&#39;t see who else they know is going.</p>
<h3 id="how-can-it-be-improved">How can it be improved?</h3>
<p>The best solution to this problem, in my opinion, is to subscribe to calendar feeds. To generate an <code>.ics</code> URL that can be fetched by website visitors. These website visitors can then input the URL into their own calendar. Then, their calendar will be regularly and automatically synchronized with the organizer&#39;s calendar feed. Any changes to the event made by the organizer will be updated in the website visitors&#39; calendar. All without the website visitor having to think about it or do anything.</p>
<p>I envision two solutions to this problem.</p>
<ol>
<li><p><strong>We normalize subscribing to others&#39; calendars.</strong></p>
<p>Each organizer, or &quot;event market,&quot; creates their own technical solution that generates an <code>.ics</code> URL. A good solution to this is Echo&#39;s: <a href="https://echo.uib.no/for-studenter/arrangementer?view=week">https://echo.uib.no/for-studenter/arrangementer?view=week</a>. Here, you can &quot;build&quot; your own <code>.ics</code> URL by checking off which types of occurrences you want to keep track of. Then you get a constantly updating list of occurrences. This is better than listening to a URL for each individual event, which would quickly clutter my personal calendar. A semi-poor solution is <a href="https://bergenbibliotek.no/arrangement/alle-arrangementer">BOB&#39;s</a> solution. It gives me all occurrences, but I can&#39;t filter out those that are absolutely irrelevant to me.</p>
</li>
<li><p><strong>One calendar service is created for all organizers.</strong></p>
<p>Meetup is an incredibly good idea but is not quite optimal for solving this problem. Only a few organizers post occurrences there, and it&#39;s not possible to subscribe to occurrences. You&#39;ll see the event in your Meetup calendar if you have an account, but only if you click that you&#39;re going. Personally, I would prefer a free alternative that doesn&#39;t require registration and that all organizers use. At the same time, all occurrences should be visible, regardless of whether you click &quot;going&quot; or not.</p>
</li>
</ol>
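<p>To make concrete what such a feed involves under the hood, here is a minimal sketch (in Python, with entirely hypothetical event data) of generating an iCalendar (RFC 5545) feed server-side. A real implementation would pull events from the organizer&#39;s database and include required fields such as <code>DTSTAMP</code>:</p>
<pre><code>from datetime import datetime, timedelta

# Hypothetical event data; a real feed would come from the organizer's database.
events = [
    {"uid": "film-night-001@example.org",
     "summary": "Film night: The Seventh Seal",
     "start": datetime(2024, 3, 14, 19, 0),
     "duration": timedelta(hours=2)},
]

def to_ics(events):
    """Render a list of events as a minimal iCalendar feed string."""
    fmt = "%Y%m%dT%H%M%S"
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//example//feed//EN"]
    for ev in events:
        lines += [
            "BEGIN:VEVENT",
            "UID:" + ev["uid"],
            "DTSTART:" + ev["start"].strftime(fmt),
            "DTEND:" + (ev["start"] + ev["duration"]).strftime(fmt),
            "SUMMARY:" + ev["summary"],
            "END:VEVENT",
        ]
    lines.append("END:VCALENDAR")
    return "\r\n".join(lines)  # RFC 5545 requires CRLF line endings
</code></pre>
<p>Serve the result at a stable URL, and any calendar application pointed at that URL will re-fetch and synchronize it automatically.</p>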
<h3 id="conclusion">Conclusion</h3>
<p>All organizations, as well as other entities, that organize and host occurrences should create a way for people to subscribe to a calendar feed that provides them with an overview of all upcoming occurrences. Alternatively, a way to import an <code>.ics</code> file if it&#39;s just a single event. This way, as an organizer, you ensure that everyone subscribing to that calendar feed gets all the information they need, as well as any ongoing changes made to the event leading up to the event date.</p>
<p>The ideal solution I envision meets the following requirements:</p>
<ul>
<li>There is a URL that I can import into my own calendar, which my calendar &quot;listens to&quot;.</li>
<li>I can filter out the types of occurrences I don&#39;t want to keep track of.</li>
<li>For each event, I can download an <code>.ics</code> file that only concerns that event.</li>
<li>If possible, I should be able to see who is going to the event. Of course, assuming consent from all subscribers.</li>
</ul>
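<p>The filtering requirement could, for instance, be met by letting the feed URL carry the wanted occurrence types as a query string. A small sketch in Python (the feed entries and the <code>types</code> parameter are hypothetical):</p>
<pre><code>from urllib.parse import urlparse, parse_qs

# Hypothetical feed entries, each tagged with an occurrence type.
FEED = [
    {"summary": "Board game night", "type": "active-party"},
    {"summary": "Project deadline", "type": "deadline"},
    {"summary": "Museum lecture", "type": "event"},
]

def filter_feed(url):
    """Return only the occurrences whose type appears in the URL's query string."""
    query = parse_qs(urlparse(url).query)
    wanted = set(query.get("types", [""])[0].split(","))
    return [e for e in FEED if e["type"] in wanted]

filter_feed("https://example.org/feed.ics?types=event,active-party")
</code></pre>
<p>Each subscriber then &quot;builds&quot; their own URL, much like Echo&#39;s solution, and irrelevant occurrence types never reach their calendar.</p>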
<p>Whoever you are as an organizer, I believe this small addition to your website can drastically improve attendance at your occurrences.</p>
<h3 id="organizers">Organizers</h3>
<p>Below you&#39;ll find a rough, clumsy list of organizers I found in Bergen.</p>
<h4 id="general">General</h4>
<ul>
<li><p><a href="https://www.bergenlive.no/konsertkalender/">Bergenlive.no</a></p>
</li>
<li><p><a href="https://www.ticketmaster.no/city/bergen/40500">Ticketmaster for Bergen</a></p>
</li>
<li><p><a href="https://ticketco.events/no/nb/m?pattern=bergen">TicketCo for Bergen</a></p>
</li>
<li><p><a href="https://www.kodebergen.no/kalender">Kode (Art Museum)</a></p>
</li>
<li><p><a href="https://usf.no/program/">USF Verftet</a></p>
</li>
<li><p><a href="https://landmark.ticketco.events/no/nb/">Landmark</a></p>
</li>
<li><p><a href="https://bergenbibliotek.no/arrangement">Bergen Offentlige Bibliotek</a></p>
</li>
</ul>
<h4 id="students">Students</h4>
<ul>
<li><p><a href="https://www.kulturhusetibergen.no/program/">Kulturhuset i Bergen</a></p>
</li>
<li><p><a href="https://kvarteret.no/events">Det Akademiske Kvarter</a></p>
</li>
<li><p><a href="https://asfbergen.no/hva-skjer/">Aktive Studenter Bergen</a></p>
</li>
<li><p><a href="https://rf.uib.no/">Bergen Realistforening</a></p>
</li>
<li><p><a href="https://echo.uib.no/for-studenter/arrangementer">Echo</a></p>
</li>
<li><p><a href="https://www.uib.no/infomedia/38184/enter-studentforeningen-ved-infomedia">Enter</a></p>
</li>
</ul>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Applying NLP Techniques to TC39's meeting notes]]></title>
            <link>https://test.cengelsen.no/en/blog/applying-nlp-techniques-to-tc39s-meeting-notes</link>
            <guid isPermaLink="false">https://test.cengelsen.no/en/blog/applying-nlp-techniques-to-tc39s-meeting-notes</guid>
            <pubDate>Fri, 15 Dec 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[This report details a coding project aimed at leveraging Natural Language Processing (NLP) techniques to extract and analyze information from TC39 meeting notes, a crucial aspect of the ECMAScript standardization process]]></description>
            <content:encoded><![CDATA[<h2 id="abstract">Abstract</h2>
<p>This report details a coding project aimed at leveraging Natural Language<br />Processing (NLP) techniques to extract and analyze information from TC39<br />meeting notes, a crucial aspect of the ECMAScript standardization process.<br />The background provides context on ECMAScript, TC39, and the significance<br />of meeting notes in shaping web development. The implementation outlines<br />the meticulous process of text extraction and the application of various<br />NLP techniques, from sentiment analysis to semantic role labeling. The<br />resulting data structure in JSON format offers a clear representation of the<br />extracted information, while a sentiment graph visually depicts emotional<br />dynamics within proposals. The project aligns with a broader goal of enhancing<br />transparency and collaboration within the ECMAScript standardization<br />process, empowering developers with nuanced insights into language changes<br />and committee discussions.</p>
<h2 id="introduction">Introduction</h2>
<p>The ECMAScript standard stands as a foundational element, providing the<br />guidelines upon which the scripting language JavaScript is built. The continuous<br />improvement of ECMAScript is steered by the TC39 committee, a vital body<br />within Ecma International. This committee, comprised of representatives from<br />diverse organizations, plays a pivotal role in shaping the standard, ensuring its<br />consistency and adaptability across various implementations.</p>
<p>The focus of this coding project is to explore Natural Language Processing<br />(NLP) techniques for extracting and analyzing information embedded within<br />the meeting notes of the TC39 committee. The meeting notes serve as a<br />comprehensive record of discussions, decisions, and proposals, reflecting the<br />dynamic nature of the ECMAScript standard evolution.</p>
<p>This report details the background of ECMAScript and the role of the TC39<br />committee, emphasizing the significance of their meeting notes in tracking the<br />development of the language. Subsequently, it delves into some NLP techniques,<br />ranging from sentiment analysis to semantic role labeling, highlighting their<br />relevance in understanding the nuances of textual content.</p>
<p>The implementation section provides a step-by-step walkthrough of the script’s<br />design, explaining how text extraction, timestamp identification, proposal<br />section recognition, and utterance processing are performed. The utilization of<br />frameworks such as TextBlob, Universal Sentence Encoder, Yake, and others<br />is detailed, showcasing a first draft approach to information extraction and<br />analysis.</p>
<p>Visual representation in the form of sentiment graphs enhances the<br />interpretability of the data, allowing for a deeper understanding of sentiment<br />dynamics within each proposal. Additionally, the JSON format output provides<br />a structured and readable overview of the processed data, facilitating further<br />analysis or sharing of results.</p>
<p>Related work is discussed, introducing NLP libraries like Stanza and Spacy,<br />along with the Hugging Face Sentence Transformer model. These resources serve as benchmarks and alternatives, highlighting the diversity of tools available in<br />the NLP landscape.</p>
<p>This project aims to create a tool that applies NLP techniques on TC39’s<br />meeting notes. It has the goal of extracting valuable information from the<br />corpus that can provide deeper understanding about how a proposal is discussed,<br />participants’ attitudes towards it and how this changes over time.</p>
<h2 id="background">Background</h2>
<p><strong>ECMAScript</strong></p>
<p>ECMA-262 is a scripting language specification that serves as the standard<br />upon which JavaScript is based. It is developed and maintained by Ecma<br />International, a standards organization. ECMA-262 provides the rules and<br />guidelines that a scripting language must follow to be considered ECMAScript-compliant.</p>
<p>JavaScript is the most well-known implementation of ECMAScript, but other<br />languages like JScript and ActionScript also adhere to the ECMAScript<br />standard. The goal of ECMA-262 is to standardize the scripting language<br />to ensure interoperability and consistency across different web browsers and<br />environments.</p>
<p>The ECMAScript specification evolves over time, with new features and<br />improvements being added to meet the demands of developers and the evolving<br />landscape of web development. Each version of ECMAScript introduces new<br />features, enhancements, and bug fixes. Developers often refer to the different<br />versions of ECMAScript by their edition number, such as ECMAScript 6 (ES6)<br />or ECMAScript 2015, which brought significant enhancements to the language.<br />Subsequent editions, like ECMAScript 2016, ECMAScript 2017, and so on,<br />have continued to build upon the standard, with the most recent at the<br />time of writing being the 14th Edition, ECMAScript 2023.</p>
<p><strong>TC39 committee</strong></p>
<p>The TC39 (Technical Committee 39) is a committee within Ecma International<br />responsible for the standardization of the ECMAScript programming language.<br />The primary goal of TC39 is to develop, maintain, and evolve the ECMAScript<br />standard.</p>
<p>TC39 is composed of representatives from various organizations, including<br />browser vendors, language designers, interested parties from the software<br />development community, and academia. The committee collaborates to propose<br />and discuss new features, improvements, and changes to ECMAScript.</p>
<p>The process of introducing a new feature or modifying an existing one typically<br />involves several stages within TC39:</p>
<ol>
<li><p><strong>Stage 0:</strong> An initial idea or proposal is presented as a strawman. This is an<br /> informal stage to get feedback and initial thoughts from the committee.</p>
</li>
<li><p><strong>Stage 1:</strong> The proposal is formalized, and its high-level design and<br /> motivation are presented to the committee. If accepted, it moves to the<br /> next stage.</p>
</li>
<li><p><strong>Stage 2:</strong> The proposal is further refined, and a preliminary specification<br /> is created. This stage involves more detailed discussions and collaboration<br /> on the proposed feature.</p>
</li>
<li><p><strong>Stage 3:</strong> The proposal is considered feature-complete, and a complete<br /> specification is provided. At this stage, it is ready for initial testing and<br /> feedback from implementers.</p>
</li>
<li><p><strong>Stage 4:</strong> The proposal has received feedback, has been tested, and is<br /> ready to be included in the ECMAScript standard. Once the committee<br /> reaches consensus, the feature is added to the standard.</p>
</li>
</ol>
<p>The TC39 committee plays a crucial role in the ongoing development and<br />improvement of ECMAScript, ensuring that the language evolves to meet<br />the needs of developers and the changing landscape of web development.<br />The committee’s work has a direct impact on the features and capabilities<br />available to developers when writing JavaScript or other languages based on<br />ECMAScript.</p>
<p><strong>Meeting Notes</strong></p>
<p>The TC39 meeting notes are documents that summarize the discussions,<br />decisions, and outcomes of the committee’s meetings. These notes provide a<br />detailed record of what was discussed during a particular meeting, including<br />proposed language features, changes to the ECMAScript standard, and any<br />other relevant topics.</p>
<p>Here are some key points about these meeting notes:</p>
<ol>
<li><p><strong>Agenda and Topics:</strong> Meeting notes typically include an agenda that<br /> outlines the topics to be discussed during the meeting. This could include<br /> specific proposals for new language features, updates on existing proposals,<br /> discussions about language design principles, and more.</p>
</li>
<li><p><strong>Attendees:</strong> The notes often list the participants who attended the<br /> meeting, including representatives from various organizations, language<br /> designers, and interested parties. This provides transparency about who<br /> is contributing to the discussions.</p>
</li>
<li><p><strong>Discussion and Decisions:</strong> For each agenda item, the notes summarize<br /> the discussions that took place. This includes the viewpoints expressed by<br /> different participants, potential concerns, and any decisions or outcomes<br /> reached by the committee. It provides insight into the reasoning behind<br /> the decisions made during the meeting.</p>
</li>
<li><p><strong>Proposal Updates:</strong> If there are updates on specific language proposals<br /> (features being considered for inclusion in ECMAScript), the meeting notes will highlight these updates. This could include advancements to a higher stage in the proposal process or changes based on feedback received.</p>
</li>
<li><p><strong>Actions and Next Steps:</strong> The notes often include action items and next<br /> steps that arise from the discussions. These could involve further research,<br /> addressing concerns, or preparing materials for the next meeting.</p>
</li>
<li><p><strong>Links to Materials:</strong> Meeting notes may include links to additional<br /> materials, such as presentation slides, documents, or external references<br /> that were discussed during the meeting.</p>
</li>
</ol>
<p>By reviewing these meeting notes, developers, implementers, and other<br />interested parties can stay informed about the ongoing work of the TC39<br />committee. It allows the broader community to understand the rationale behind<br />language changes, track the progress of specific proposals, and provide feedback<br />on the evolving ECMAScript standard.</p>
<p><strong>Markdown Files</strong><br />At the root level of the GitHub repository, there may be various files and<br />folders related to the TC39 project. Among them is a designated folder where<br />meeting notes are stored, called “meetings”.</p>
<p>This folder contains folders representing each month of each year where there<br />has been a meeting, meaning one folder for every two months since May of 2012.<br />In each of these folders are the meeting notes, as well as other relevant files for<br />the meeting, like <code>toc.md</code> and <code>summary.md</code>.</p>
<p>However, the only files relevant to this project are the meeting notes themselves.</p>
<p><strong>Formatting of the meeting notes</strong><br />The meeting notes are formatted in such a way that each utterance can be tied<br />to a specific person. In this way, what each person has to contribute to the<br />current proposal is easily distinguishable from the other people involved in the<br />meeting. It can be broken down in the following way.</p>
<ol>
<li><p><strong>Speaker’s Acronym:</strong> The three-letter acronym at the beginning of each<br /> line represents the identifier of the person speaking. These acronyms are<br /> usually unique to each participant and are used consistently throughout<br /> the meeting notes.</p>
</li>
<li><p><strong>Colon (:) Separator:</strong> The colon serves as a separator between the<br /> speaker’s acronym and the content of their utterance. It visually<br /> distinguishes the speaker from their comment.</p>
</li>
<li><p><strong>Utterance Content:</strong> Following the colon, the actual content of what the<br /> person is saying is presented. This is the substance of the participant’s<br /> contribution to the discussion, and it could include statements, questions,<br /> proposals, concerns, or any other relevant remarks.</p>
</li>
</ol>
<p>Here’s an example from <code>feb-01.md</code> in the <code>2023-01</code> folder:</p>
<pre><code>ABC: Just a note to SYG to follow up with offline and to everyone interested in implementing this and trying implementation...

DEF: Ephemeron collection.

ABC: Thank you. I was trying to remember the word. By doing the transpose thing, the case that needs to be cheap becomes cheap.

DEF: So there are a couple different implementation strategies. Trade off, the big O notation of the run, the get, or the wrap. (...)
</code></pre>
<p>In this example, <strong>ABC</strong> and <strong>DEF</strong> are three-letter acronyms representing different<br />participants. After the colon, each line presents the content of the participant’s<br />utterance or comment.</p>
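<p>The three parts described above map naturally onto a regular expression. As an illustrative sketch in Python (not the project&#39;s actual code):</p>
<pre><code>import re

# One speaker acronym, a colon separator, then the utterance content.
UTTERANCE = re.compile(r"^([A-Z]{3}): (.+)$")

def parse_utterances(text):
    """Yield (speaker, content) pairs from meeting-note lines."""
    for line in text.splitlines():
        match = UTTERANCE.match(line)
        if match:
            yield match.group(1), match.group(2)

notes = "ABC: Thank you.\n\nDEF: Ephemeron collection."
pairs = list(parse_utterances(notes))
</code></pre>
<p>Grouping the resulting pairs by speaker is then enough to attribute every contribution in a proposal section to a participant. Real notes may also contain multi-line utterances, which a robust parser would have to handle.</p>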
<h2 id="an-overview-of-nlp-techniques">An Overview of NLP techniques</h2>
<ol>
<li><p><strong>Sentiment Analysis</strong> is a natural language processing technique designed<br /> to discern and quantify the emotional tone expressed in a piece of text,<br /> typically categorized as positive, negative, or neutral sentiment. This<br /> process involves the use of machine learning algorithms to analyze<br /> words and phrases within a context, considering linguistic nuances and<br /> variations. Sentiment analysis is particularly valuable in the business<br /> realm for gauging customer satisfaction through reviews and social media<br /> comments. Additionally, it aids in monitoring public sentiment towards<br /> products, services, or brands, helping organizations make informed<br /> decisions based on the prevailing attitudes within the target audience.<br /> (Devopedia. 2022.)</p>
</li>
<li><p><strong>Named Entity Recognition (NER)</strong> is a crucial component of<br /> information extraction in natural language processing. It involves<br /> identifying and classifying entities such as names of people, organizations,<br /> locations, dates, and other specific terms within a given text. NER<br /> systems employ machine learning algorithms that are trained on<br /> annotated datasets to accurately locate and categorize these entities.<br /> Applications of NER range from extracting structured information from<br /> unstructured text, improving search engine capabilities, to facilitating<br /> question-answering systems by identifying key entities within a document.<br /> (Devopedia. 2020.)</p>
</li>
<li><p><strong>Semantic Role Labeling (SRL)</strong> is a semantic parsing task that<br /> focuses on understanding the relationships between different elements in a<br /> sentence by assigning specific roles to words or phrases, such as identifying<br /> the agent, patient, or beneficiary in a given action. This technique goes<br /> beyond traditional syntactic parsing to capture the deeper meaning<br /> and roles of each component within a sentence. SRL is instrumental in<br /> tasks requiring a nuanced understanding of natural language, including<br /> machine translation, question answering, and sentiment analysis, where<br /> discerning the roles of entities is crucial for accurate interpretation.<br /> (Devopedia. 2020.)</p>
</li>
<li><p><strong>Part-of-Speech Tagging</strong>, also called POS tagging, assigns part-of-speech tags<br /> to words and tackles ambiguity in natural language processing by resolving<br /> multiple meanings based on context. Originally a linguistic endeavor, POS tagging<br /> transitioned to a statistical approach, with models achieving over 97%<br /> accuracy. This pre-processing step is fundamental in NLP, supporting<br /> applications such as information retrieval, named entity recognition, and<br /> text-to-speech systems. (Devopedia. 2019.)</p>
</li>
<li><p><strong>Text Summarization</strong> is a text processing technique that aims to distill<br /> the essential information from a document while preserving its core<br /> meaning. There are two main types of summarization: <em>extractive</em>, which<br /> selects and combines existing sentences, and <em>abstractive</em>, which generates<br /> new sentences to convey the summarized content. Summarization finds<br /> applications in news articles, research papers, and document management,<br /> providing a concise overview of lengthy texts and aiding in information<br /> retrieval and decision-making processes. (Devopedia. 2020)</p>
</li>
<li><p><strong>Semantic Similarity</strong> quantifies the likeness between two pieces of text<br /> based on their meaning rather than relying solely on lexical or syntactic<br /> similarity. These measures take into account the context, semantics, and<br /> relationships between words, enabling a more nuanced understanding of<br /> similarity. Semantic similarity is applied in various NLP tasks, including<br /> duplicate detection, document clustering, and recommendation systems.<br /> By capturing the underlying meaning of text, semantic similarity enhances<br /> the accuracy and relevance of systems that require matching or grouping<br /> textual information. (Harispe et al., 2015)</p>
</li>
<li><p><strong>Keyword Extraction</strong> involves identifying and extracting the most relevant and significant words or phrases from a given text. This process helps to distill the key themes, topics, or concepts within a document, enabling a more concise representation of its content. NLP algorithms use various techniques, such as statistical analysis, natural language processing, and machine learning, to determine the importance of words based on their frequency, context, and relationships within the text. Ultimately, keyword extraction aids in summarizing and understanding the essential information contained in a body of text. (Beliga et al., 2015)</p>
</li>
</ol>
<h2 id="implementation">Implementation</h2>
<p>In this project, I applied sentiment analysis, semantic similarity and keyword extraction to the meeting notes corpus. For extracting relevant data and producing plots from the meeting notes, this seemed sufficient.</p>
<p>For this entire project, I chose Python to implement the techniques. This is mainly due to the vast number of libraries available, but also because of its readability for most developers. In addition, it is the language I am most confident in.</p>
<p>You can find the code for the implementation in my GitHub repository, which is listed in the references section (Engelsen, 2023).</p>
<p><strong>Text extraction</strong></p>
<ol>
<li><p><strong>File Selection:</strong> The script uses the glob module, a Python module for matching path names against a specified pattern, to identify markdown files within the specified directory, excluding certain files like “toc.md” and “summary.md”. This ensures that only relevant files are considered for processing, narrowing the scope of analysis.</p>
</li>
<li><p><strong>Reading Markdown Files:</strong> For each markdown file, the script opens and reads its content using Python’s built-in open function and read method. The custom process_markdown_file function encapsulates this operation, facilitating the extraction of text from individual markdown files.</p>
</li>
<li><p><strong>Timestamp Extraction:</strong> The code extracts timestamps from markdown<br /> file names by employing regular expressions to recognize patterns like<br /> month-day.md. It maps these patterns to corresponding month numbers<br /> and the current year, generating accurate timestamps for each proposal<br /> section.</p>
</li>
<li><p><strong>Proposal Section Extraction:</strong> Within each markdown file, the<br /> script identifies proposal sections by applying a regular expression<br /> (proposal_section_pattern). The re.findall function is used to<br /> extract titles of these proposal sections, forming a list of titles for<br /> subsequent processing.</p>
</li>
<li><p><strong>Utterance and Text Extraction:</strong> For each proposal section, the script iterates through its titles, extracting the corresponding text. It filters out irrelevant sections based on predefined criteria using the isDumbTitle function. Relevant text is then extracted by slicing the content based on the position of the title in the markdown text.</p>
</li>
<li><p><strong>Text Cleaning:</strong> Extracted proposal text undergoes cleaning through<br /> regular expressions, removing undesirable information such as presenter<br /> details and slide references. Patterns like presenter names and slide<br /> references are identified and eliminated using the re.sub function,<br /> ensuring the text is focused on the core content.</p>
</li>
<li><p><strong>Utterance and Sentence Processing:</strong> The script processes the proposal text by splitting it into utterances using a regular expression (utterance_pattern) to identify speaker contributions. Each utterance is further divided into sentences using the re.split function. Sentences are processed individually, and keyword extraction is performed using the Yake library.</p>
</li>
</ol>
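<p>Under stated assumptions, the steps above can be sketched roughly as follows. This is a minimal sketch, not the repository's actual code: the file-name shape ("nov-28.md"), the "## " heading convention for proposal sections, and the function name extract_proposals are illustrative assumptions.</p>

```python
import datetime
import glob
import os
import re

# Month abbreviations mapped to month numbers, as described in step 3.
MONTHS = {"jan": 1, "feb": 2, "mar": 3, "apr": 4, "may": 5, "jun": 6,
          "jul": 7, "aug": 8, "sep": 9, "oct": 10, "nov": 11, "dec": 12}

def extract_proposals(notes_dir):
    """Sketch of steps 1-4: select files, read them, derive a timestamp
    from each file name, and collect proposal section titles."""
    proposals = []
    for path in sorted(glob.glob(os.path.join(notes_dir, "*.md"))):
        name = os.path.basename(path)
        # Step 1: skip known non-note files.
        if name in ("toc.md", "summary.md"):
            continue
        # Step 2: read the markdown content.
        with open(path, encoding="utf-8") as f:
            content = f.read()
        # Step 3: timestamp from file names shaped like "nov-28.md",
        # combined with the current year.
        m = re.match(r"([a-z]{3})-(\d{1,2})\.md", name)
        year = datetime.date.today().year
        timestamp = (f"{year}-{MONTHS[m.group(1)]:02d}-{int(m.group(2)):02d}"
                     if m and m.group(1) in MONTHS else None)
        # Step 4: proposal section titles, here assumed to be "## " headings.
        titles = re.findall(r"^## (.+)$", content, flags=re.MULTILINE)
        proposals.append({"file": name, "timestamp": timestamp, "titles": titles})
    return proposals
```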
<p><strong>Proposal Dictionary Object</strong></p>
<p>The proposal dictionary encapsulates essential information about a specific proposal extracted from the meeting notes. Here is an explanation of the object’s properties:</p>
<ul>
<li><p>title: This field stores the title of the proposal, providing a concise<br />  identifier for the proposal’s subject matter.</p>
</li>
<li><p>timestamp: Represents the timestamp associated with the proposal,<br />  typically extracted from the markdown file name.</p>
</li>
<li><p>utterances: This is a list containing individual utterance dictionary objects. Each utterance corresponds to a section of the proposal where a distinct speaker contributes.</p>
</li>
<li><p>full text: The entire content of the proposal is stored here, facilitating<br />  comprehensive analysis and comparisons.</p>
</li>
</ul>
<p><strong>Utterance Dictionary Object</strong></p>
<p>The utterance object represents a single speaker’s contribution to the discussion of a proposal. Here is an explanation of the object’s properties:</p>
<ul>
<li><p>utterance_number: The chronological number of the utterance within the<br />  proposal.</p>
</li>
<li><p>timestamp: Carries the timestamp associated with the proposal. This will<br />  correspond to the date of the meeting, which is a date translated from the<br />  name of the markdown file.</p>
</li>
<li><p>sentences: This list contains individual sentence dictionary objects, each representing a sentence within the utterance.</p>
</li>
<li><p>polarity: Represents the overall sentiment polarity of the entire<br />  utterance.</p>
</li>
<li><p>subjectivity: Reflects the subjectivity of the utterance as a whole.</p>
</li>
<li><p>keywords: Stores keywords extracted from the utterance using the Yake<br />  library, providing insights into the main topics.</p>
</li>
</ul>
<p><strong>Sentence Dictionary Object</strong></p>
<p>The sentence object represents an individual sentence within an utterance. Here is an explanation of the object’s properties:</p>
<ul>
<li><p>sentence_number: A unique identifier for each sentence within an<br />  utterance.</p>
</li>
<li><p>text: a string containing the contents of the individual sentence.</p>
</li>
<li><p>polarity: The sentiment polarity of the sentence.</p>
</li>
<li><p>subjectivity: The subjectivity of the sentence.</p>
</li>
</ul>
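<p>Taken together, the three objects nest as in the following illustrative literal. The field names follow the descriptions above; all values are hypothetical examples, not data from the actual corpus.</p>

```python
# Hypothetical example of a fully populated proposal dictionary.
proposal = {
    "title": "Example Proposal",
    "timestamp": "2023-11-28",
    "full_text": "Speaker A: I support this. Speaker B: I have concerns.",
    "utterances": [
        {
            "utterance_number": 1,
            "timestamp": "2023-11-28",   # same date as the proposal
            "polarity": 0.4,             # overall sentiment of the utterance
            "subjectivity": 0.6,         # overall subjectivity of the utterance
            "keywords": ["support"],     # extracted with Yake
            "sentences": [
                {
                    "sentence_number": 1,
                    "text": "I support this.",
                    "polarity": 0.4,
                    "subjectivity": 0.6,
                },
            ],
        },
    ],
}
```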
<p><strong>Frameworks Used</strong></p>
<ul>
<li><p><strong>TextBlob</strong> TextBlob is a library that simplifies common natural language<br />  processing tasks. In this script, it is employed for basic sentiment analysis,<br />  allowing the determination of the sentiment polarity and subjectivity of<br />  both sentences and entire utterances. (Loria, 2020)</p>
</li>
<li><p><strong>Universal Sentence Encoder</strong> TensorFlow’s Universal Sentence Encoder<br />  (USE) is a pre-trained model that converts text into high-dimensional<br />  vectors. This script uses USE to generate embeddings for sentences,<br />  enabling the calculation of semantic similarity between different texts.<br />  (Cer et al., 2018)</p>
</li>
<li><p><strong>Yake</strong> Yake is a keyword extraction library that identifies significant<br />  keywords within a given piece of text. In this script, Yake is utilized to<br />  extract keywords from each utterance, aiding in the understanding of the<br />  main topics discussed. (Campos et al., 2020)</p>
</li>
<li><p><strong>Matplotlib</strong> Matplotlib is a plotting library for Python. In this script, it is used to create sentiment analysis plots. These plots visually represent how sentiment changes over the course of utterances in a proposal. (Hunter, 2007)</p>
</li>
<li><p><strong>Regular Expression</strong> Regular expressions are applied for pattern<br />  matching and extraction. In this context, they help identify specific<br />  sections of markdown files and clean proposal texts by removing<br />  irrelevant information such as presenter details and slides. (Python<br />  Software Foundation, 2023)</p>
</li>
<li><p><strong>TensorFlow</strong> TensorFlow is an open-source machine learning framework. In this script, it is used to load and run the pre-trained Universal Sentence Encoder model, which encodes sentences into meaningful vectors. (Abadi et al., 2015)</p>
</li>
</ul>
<p><strong>Sentiment Graph</strong> The sentiment graph is generated using Matplotlib and serves to visually depict the sentiment dynamics within a proposal. Here is an explanation of the graph’s properties:</p>
<ul>
<li><p><strong>X-axis:</strong> Represents individual utterances within the proposal.</p>
</li>
<li><p><strong>Y-axis:</strong> Depicts the sentiment polarity, showcasing shifts in sentiment<br />  from positive to negative between -1.0 and 1.0. -1.0 means completely<br />  negative, 0 means completely neutral and 1.0 means completely positive.</p>
</li>
<li><p><strong>Highlights:</strong> Points on the graph highlight utterances with particularly high positive or negative sentiment, providing a quick overview of sentiment peaks and troughs. The utterances that cause these peaks are written out to a JSON file.</p>
</li>
</ul>
<p>For each proposal in the proposals list, a sentiment graph is plotted to visualize the sentiment of each utterance on that proposal, and how the sentiment changes over the course of the discussion.</p>
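<p>A plot of this kind could be produced roughly as follows. This is a minimal sketch of the graph described above, not the repository's code: the function name, the 0.5 peak threshold, and the output file names are illustrative assumptions.</p>

```python
import json
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

def plot_proposal_sentiment(proposal, threshold=0.5, out_prefix="sentiment"):
    """Plot utterance polarity for one proposal and write the peak
    utterances out to a JSON file, as described above."""
    polarities = [u["polarity"] for u in proposal["utterances"]]
    xs = range(1, len(polarities) + 1)
    fig, ax = plt.subplots()
    ax.plot(xs, polarities, marker="o")
    ax.set_ylim(-1.0, 1.0)                 # polarity range
    ax.set_xlabel("Utterance number")      # X-axis: individual utterances
    ax.set_ylabel("Sentiment polarity")    # Y-axis: -1.0 (negative) to 1.0 (positive)
    ax.set_title(proposal["title"])
    # Highlight strongly positive/negative utterances and record them.
    peaks = [u for u in proposal["utterances"] if abs(u["polarity"]) >= threshold]
    for u in peaks:
        ax.scatter(u["utterance_number"], u["polarity"], color="red", zorder=3)
    fig.savefig(f"{out_prefix}.png")
    plt.close(fig)
    with open(f"{out_prefix}_peaks.json", "w", encoding="utf-8") as f:
        json.dump(peaks, f, indent=2)
    return peaks
```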
<p><strong>Proposal Data Structure in JSON Format</strong> At the conclusion of the<br />script, the data structure of each proposal is printed in JSON format. This<br />output provides a detailed view of the processed data, including titles,<br />timestamps, utterances, and full text. JSON format is chosen for its readability<br />and ease of inspection, making it convenient for further analysis or sharing of<br />results.</p>
<p><img src="https://test.cengelsen.no/images/inf319rapport/Decorator_object.webp" alt="Figure 1: Example of a sentiment graph for a proposal object." /></p>
<h2 id="related-work">Related work</h2>
<p><strong>Stanford NLP (Stanza)</strong></p>
<p>Stanford NLP, now known as Stanza, is a robust natural language processing<br />library developed by the Stanford NLP Group. It provides a suite of state-of-the-<br />art tools for various language processing tasks, including tokenization, part-of-<br />speech tagging, named entity recognition, and dependency parsing. Stanza offers<br />pre-trained models for multiple languages, enabling users to perform complex<br />linguistic analyses with ease. One of its key strengths lies in its deep integration<br />with deep learning techniques, resulting in high accuracy and efficiency across a<br />range of NLP tasks. Its focus on multilingual support makes it a versatile choice<br />for researchers and developers working with diverse linguistic datasets. (Peng<br />et al., 2020)</p>
<p><strong>Spacy NLP</strong></p>
<p>Spacy is a popular open-source natural language processing library designed<br />for efficiency and ease of use. It excels in providing fast and accurate linguistic<br />annotations, including tokenization, part-of-speech tagging, named entity<br />recognition, and dependency parsing. Spacy’s streamlined API and pre-trained<br />models make it user-friendly for both beginners and experienced developers. It<br />is known for its efficiency, allowing for real-time application in various contexts.<br />Spacy also supports custom model training, enabling users to adapt it to<br />domain-specific language patterns. Overall, Spacy is a versatile tool for NLP<br />tasks, striking a balance between performance and simplicity. (Honnibal et al.,<br />2020)</p>
<p><strong>Hugging Face Sentence Transformer (all-MiniLM-L6-v2)</strong></p>
<p>Hugging Face’s Sentence Transformer library, specifically the model “all-<br />MiniLM-L6-v2,” is a part of the broader Transformers library. It is developed<br />by Hugging Face, a platform that hosts a vast collection of pre-trained models<br />for natural language processing tasks. The Sentence Transformer model excels<br />in creating embeddings for sentences or text snippets, making it valuable for<br />tasks such as semantic similarity and information retrieval. “all-MiniLM-L6-v2”<br />refers to the specific architecture and version of the MiniLM model used in<br />this implementation. The Hugging Face Transformers library simplifies the<br />integration of advanced transformer models into various NLP applications,<br />fostering accessibility and innovation in the field. (Hugging Face, n.d)</p>
<h2 id="conclusion-amp-future-work">Conclusion &amp; Future Work</h2>
<p>The conclusion I can draw from this project is that the best approach to extracting usable data from the meeting notes is to use an ensemble of different NLP libraries and techniques. Using only a single pretrained model is inadequate for the purpose of this project.</p>
<p>The plots produced by the implementation in this project all seem to have a positive bias. By qualitative inspection, the average sentiment across the plots can be estimated to lie between 0 and 0.5. While there are some peaks in the graphs in both the negative and positive direction, the sentiment analysis estimates most utterances to be either neutral or slightly positive.</p>
<p>It should be acknowledged, however, that the greatest challenge of this project<br />has been to find a way to accurately estimate whether two, or more, discussions<br />are talking about the same proposal. The implementation in this project is not<br />as nuanced as it could be, and as a result the utterances measured in each<br />proposal might not completely reflect the true evolution of sentiment of each<br />proposal.</p>
<p><strong>Runtime</strong></p>
<p>The natural next step to improve this implementation would be to reduce the runtime. As of now, on an average desktop, it takes between 36 and 48 hours to create sentiment graphs for all proposals mentioned in the meeting notes, dating back to 2016. As the proposals list grows, the runtime increases, due to an accumulation of unique elements.</p>
<p>Initially, my thought was to concatenate the full text of each proposal that meets the similarity threshold, to increase the accuracy of the semantic similarity calculation. However, the script eventually stopped due to a lack of available RAM.</p>
<p>A potential avenue of investigation could be to create summaries of the full texts<br />of each proposal and concatenate those for accurate comparison.</p>
<p><strong>Unique identifier for each proposal</strong></p>
<p>The greatest challenge of this project was how to determine whether two sections in the meeting notes are actually discussing the same proposal. My assumption was that if the semantic similarity between two sections is above a certain threshold, they must be about the same thing. However, this is not necessarily the case. Consider the following scenario: a unique proposal is discussed in 2017. Later, in 2019, a different unique proposal is discussed. The later proposal, however, is completely dependent on the earlier one, so the discussion of the later proposal contains many references to, and discussions of, the earlier proposal. By only evaluating semantic similarity, these two sections would be deemed part of the same proposal, which is not the case.</p>
<p>Because of this type of dilemma, an improved way of estimating similarity is necessary to accurately determine whether two sections are discussing the same proposal.</p>
<p>My first suggestion is to give each unique proposal discussed at the meetings a unique identifier, e.g. “AYD245”. Then, every time the same proposal is discussed, the same identifier is applied to the section in the meeting notes. This way, the matching no longer depends on meeting a threshold, but rather on verifying the unique identifier.</p>
<p>My second suggestion is to use an ensemble of different NLP techniques to create<br />a composite score of similarity. This way, there is more nuance involved in the<br />estimation of similarity.</p>
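<p>A composite score along these lines could, as a rough sketch, blend embedding similarity with keyword overlap. The weights, the Jaccard keyword component, and all function names below are illustrative assumptions, not part of the current implementation; the embeddings are assumed to come from a model such as the Universal Sentence Encoder.</p>

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (e.g. from USE)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def keyword_jaccard(kw_a, kw_b):
    """Overlap of the Yake keyword sets of two sections."""
    sa, sb = set(kw_a), set(kw_b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def composite_similarity(emb_a, emb_b, kw_a, kw_b, w_embed=0.7, w_kw=0.3):
    """Weighted blend of semantic and lexical evidence that two
    sections discuss the same proposal. Weights are hypothetical."""
    return (w_embed * cosine_similarity(emb_a, emb_b)
            + w_kw * keyword_jaccard(kw_a, kw_b))
```

More signals (speaker overlap, explicit cross-references, dates) could be added to the blend in the same way, each with its own weight.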
<h2 id="references">References</h2>
<ol>
<li><p>Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jozefowicz, R., Jia, Y., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Schuster, M., Monga, R., Moore, S., Murray, D., Olah, C., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., &amp; Zheng, X. (2015). TensorFlow, Large-scale machine learning on heterogeneous systems [Computer software]. <a href="https://doi.org/10.5281/zenodo.4724125">https://doi.org/10.5281/zenodo.4724125</a></p>
</li>
<li><p>Campos, R., Mangaravite, V., Pasquali, A., Jatowt, A., Jorge, A., Nunes, C. and Jatowt, A. (2020). YAKE! Keyword Extraction from Single Documents using Multiple Local Features. In Information Sciences Journal. Elsevier, Vol 509, pp 257-289. <a href="https://doi.org/10.1016/j.ins.2019.09.013">https://doi.org/10.1016/j.ins.2019.09.013</a></p>
</li>
<li><p>Cer, D., Yang, Y., Kong, S.-y., Hua, N., Limtiaco, N., St. John, R., Constant, N., Guajardo-Cespedes, M., Yuan, S., Tar, C., Sung, Y.-H., Strope, B., &amp; Kurzweil, R. (2018). Universal Sentence Encoder. arXiv preprint <a href="https://arxiv.org/pdf/1803.11175.pdf">https://arxiv.org/pdf/1803.11175.pdf</a></p>
</li>
<li><p>Devopedia. 2019. “Part-of-Speech Tagging.” Version 3, September 8. Accessed 2023-11-12. <a href="https://devopedia.org/part-of-speech-tagging">https://devopedia.org/part-of-speech-tagging</a></p>
</li>
<li><p>Devopedia. 2020. “Named Entity Recognition.” Version 5, February 4. Accessed 2023-11-12. <a href="https://devopedia.org/named-entity-recognition">https://devopedia.org/named-entity-recognition</a></p>
</li>
<li><p>Devopedia. 2020. “Semantic Role Labelling.” Version 3, January 10. Accessed 2023-11-12. <a href="https://devopedia.org/semantic-role-labelling">https://devopedia.org/semantic-role-labelling</a></p>
</li>
<li><p>Devopedia. 2020. “Text Summarization.” Version 2, February 21. Accessed 2023-11-12. <a href="https://devopedia.org/text-summarization">https://devopedia.org/text-summarization</a></p>
</li>
<li><p>Devopedia. 2022. “Sentiment Analysis.” Version 52, January 26. Accessed 2023-11-12. <a href="https://devopedia.org/sentiment-analysis">https://devopedia.org/sentiment-analysis</a></p>
</li>
<li><p>Engelsen, C. (2023). sentiment-plotter [Computer software]. <a href="https://github.com/Cengelsen/sentiment-plotter">https://github.com/Cengelsen/sentiment-plotter</a></p>
</li>
<li><p>Harispe, S., Ranwez, S., Janaqi, S., &amp; Montmain, J. (2015). Semantic Similarity from Natural Language and Ontology Analysis. Synthesis Lectures on Human Language Technologies. Springer International Publishing. <a href="https://doi.org/10.1007/978-3-031-02156-5">https://doi.org/10.1007/978-3-031-02156-5</a></p>
</li>
<li><p>Honnibal, M., Montani, I., Van Landeghem, S., &amp; Boyd, A. (2020). spaCy: Industrial-strength Natural Language Processing in Python. <a href="https://doi.org/10.5281/zenodo.1212303">https://doi.org/10.5281/zenodo.1212303</a></p>
</li>
<li><p>Hunter, J. D. (2007). Matplotlib: A 2D graphics environment. Computing in Science &amp; Engineering, 9(3), 90–95. <a href="https://doi.org/10.1109/MCSE.2007.55">https://doi.org/10.1109/MCSE.2007.55</a></p>
</li>
<li><p>Hugging Face. (n.d.). Sentence Transformers: MiniLM-L6-v2. Hugging Face Model Hub. Retrieved December 18, 2023, from <a href="https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2">https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2</a></p>
</li>
<li><p>Loria, S. (2020). TextBlob: Simplified Text Processing (Version 0.16.0). Retrieved from <a href="https://textblob.readthedocs.io/_/downloads/en/dev/pdf/">https://textblob.readthedocs.io/_/downloads/en/dev/pdf/</a></p>
</li>
<li><p>Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton and Christopher D. Manning. 2020. Stanza: A Python Natural Language Processing Toolkit for Many Human Languages. In Association for Computational Linguistics (ACL) System Demonstrations. 2020. <a href="https://nlp.stanford.edu/pubs/qi2020stanza.pdf">https://nlp.stanford.edu/pubs/qi2020stanza.pdf</a></p>
</li>
<li><p>Python Software Foundation. (2023). “re” - Regular expression operations. Python 3.11. Available at: <a href="https://docs.python.org/3/library/re.html">https://docs.python.org/3/library/re.html</a></p>
</li>
<li><p>Beliga, Slobodan; Ana, Meštrović; Martinčić-Ipšić, Sanda. (2015). “An Overview of Graph-Based Keyword Extraction Methods and Approaches”. Journal of Information and Organizational Sciences. 39 (1): 1–20. <a href="https://hrcak.srce.hr/file/207669">https://hrcak.srce.hr/file/207669</a></p>
</li>
</ol>
<h2 id="appendix-a-sentiment-graphs-for-proposals">Appendix A. Sentiment graphs for proposals</h2>
<p>Here are some further example graphs produced by my implementation.</p>
<p><img src="https://test.cengelsen.no/images/inf319rapport/Async_Context_16.webp" alt="Example 1" /><br /><img src="https://test.cengelsen.no/images/inf319rapport/Decorator_export_ordering_3.webp" alt="Example 2" /><br /><img src="https://test.cengelsen.no/images/inf319rapport/Intl_era_and_monthCode_for_Stage_2_2.webp" alt="Example 3" /><br /><img src="https://test.cengelsen.no/images/inf319rapport/Intl_Locale_Info_API_Stage_3_update_9.webp" alt="Example 4" /><br /><img src="https://test.cengelsen.no/images/inf319rapport/Type_Annotations_Proposal_Update_14.webp" alt="Example 5" /></p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Examination Of The Ethical Use Of AI In Social Media]]></title>
            <link>https://test.cengelsen.no/en/blog/examination-of-the-ethical-use-of-ai-in-social-media</link>
            <guid isPermaLink="false">https://test.cengelsen.no/en/blog/examination-of-the-ethical-use-of-ai-in-social-media</guid>
            <pubDate>Mon, 16 Oct 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Exploratory analysis of social media's use of AI and its impact on psychological, cognitive and ethical challenges.]]></description>
            <content:encoded><![CDATA[<h3 id="introduction">Introduction</h3>
<p>In the ever-expanding digital universe, we find ourselves navigating the complex terrain of social media. From image sharing to blog posting and thought-provoking discussions, social media platforms have become a dominant virtual town square. However, behind the scenes, a plethora of algorithms and artificial intelligence (AI) systems work tirelessly to shape our online experience on these platforms. With this in mind, should the use of AI by social media platforms be banned?</p>
<p>In this text, I will delve into the psychological and cognitive effects that social media inflicts on its users and make a case for why the deployment of AI on these platforms should either be banned outright or subjected to rigorous regulation. Moreover, I will examine how this relates to being an AI fairness problem.</p>
<h3 id="discussion">Discussion</h3>
<p>First, I will start with a definition of AI, followed by a definition of social media.</p>
<p>AI can be defined as “the capability of computer systems or algorithms to imitate human behavior” (Merriam-Webster, n.d.). To imitate human behavior, a computer system can simulate parts of human intelligence or cognitive skills to perform tasks such as problem-solving, learning, reasoning, perception, and language understanding. Typically, AI systems use algorithms, data, and machine learning to make autonomous decisions and adapt to changing situations.</p>
<p>Social media refers to online platforms and websites that enable users to create, share and interact with content and other users. Users can post text, images, videos, and engage with others through comments and likes. Social media facilitates communication and networking on a global scale, making it a prominent aspect of modern digital culture. (Merriam-Webster, n.d.) </p>
<p>With this in mind, and putting the concept of AI aside for a minute, there has been some research on the negative psychological and cognitive effects social media has on its users. The following findings have been compiled by the Center for Humane Technology in their “Ledger of Harms” (Center for Humane Technology, 2022). I will proceed to list a few of them.</p>
<ul>
<li><p>Cyberbullying significantly raises the risk of suicide ideation in children, making them three times more likely to contemplate suicide than their peers. The online bullying experience is particularly distressing, likely due to the victim’s awareness of a larger public audience (van Geel, Vedder, &amp; Tanilon, 2014).</p>
</li>
<li><p>Excessive screen-based media use in preschoolers, over an hour a day, hampers core brain regions responsible for language and literacy. As screen time increases, language skills decrease, and vital brain regions suffer structural integrity loss. This study highlights concerns about screen use on young children’s brain development (Hutton, Dudley, &amp; Horowitz-Kraus, 2019).</p>
</li>
<li><p>Prolonged screen time in early childhood leads to developmental delays in language, problem-solving, and social interaction, persisting for over a year. Excessive screen exposure during these formative years can significantly hinder a child’s optimal development (Madigan, Browne, Racine, &amp; Mori, 2019).</p>
</li>
<li><p>Increasing social media usage correlates with higher depression levels in teenagers. For every additional hour spent on social media, there’s a 2% increase in depressive symptoms (Boers, Afzali, Newton, &amp; Conrod, 2019).</p>
</li>
<li><p>Merely having a smartphone around diverts attention, even when off and face down. An experiment showed a drop in memory and problem-solving when phones were nearby but off. Surprisingly, phone-dependent people improved memory and intelligence when phones were in another room. Phones are “high-priority stimuli,” sapping attention, even when ignored (Ward, Duke, Gneezy, &amp; Bos, 2017).</p>
</li>
<li><p>Soon after starting smartphone use, mental math declines, attention weakens, and conformity rises. Brain scans show reduced activity in the right prefrontal cortex, as seen in ADHD (Hadar, Hadas, Lazarovits, Alyagon, Eliraz, &amp; Zargen, 2017).</p>
</li>
<li><p>Memory favors social text over complex text. People recall comments on news more than the article or headline. They remember Facebook posts better than book sentences or faces (Mickes, Darby, Hwe, Bajic, Warker, Harris, &amp; Christenfeld, 2013).</p>
</li>
<li><p>Media channel switching harms working and long-term memory. The Extractive Attention Economy and many social platforms threaten human memory (Uncapher and Wagner, 2018).</p>
</li>
</ul>
<p>Keeping these documented effects in mind, I will proceed by trying to answer the initial question.</p>
<p>AI is used in different facets of social media. However, the most relevant ones are its use in recommendation algorithms, behavior-based marketing, and facial recognition. If we take into account the negative psychological and cognitive effects social media has on adolescents and adults alike, the use of AI in these features should not be taken lightly. To me, there seem to be three main aspects that could potentially be the reason for these negative effects:</p>
<ol>
<li>Using AI to keep people active on the platform longer</li>
<li>Using AI to create targeted ads based on the user’s activity and data</li>
<li>Using AI to distort people’s view of reality</li>
</ol>
<h3 id="using-ai-to-keep-people-active-on-the-platform-longer">Using AI to keep people active on the platform longer</h3>
<p>Recommendation algorithms, powered by AI, are arguably what drives personalized content feeds. These algorithms analyze user behavior, preferences, and engagement patterns to suggest posts, videos, or products that are likely to resonate with the user. This enhances the user experience and keeps users engaged, driving retention and platform usage (Fayyaz et al., 2020).</p>
<p>This, we could say, is a case of unsupervised automated decision making. The AI decides what shows up in the user’s content feed without any human in the loop. A machine learning approach to creating the list of content shown to the user results in a system that only recommends content the algorithm deems relevant. I would say this is illegitimate, since there is no consideration of whether the user benefits from seeing the content; only the decision that the user should see it, due to its level of “relevance”.</p>
<p>The content is seen as relevant because the AI estimates a higher chance for the user to interact with the content in some way. If the user is consistently shown content that the user wants to interact with, the user stays active longer on the platform. If the user stays active longer on the platform, it is exposed to more ads, which in turn increases the platform’s revenue.</p>
<p>In other words, these social media platforms use AI to create a list of content that the user has a high probability of interacting with, preying on the user’s cognitive mechanisms to optimize ad revenue. In my opinion, it is an unethical practice to exploit people’s cognitive mechanisms and pitfalls for capital gain, especially since these people are most likely unaware of how their cognitive mechanisms work, or even what they are.</p>
<h3 id="using-ai-to-create-targeted-ads-based-on-the-users-activity-and-data">Using AI to create targeted ads based on the users activity and data</h3>
<p>Behavioral advertising is the practice of tracking user interactions and preferences for the purpose of delivering highly targeted advertisements (Boerman et al., 2017). This results in increased ad effectiveness and, in turn, a better return on investment for businesses. Considering that the Norwegian Data Protection Authority has put in effect a temporary ban on behavioral advertising, which affects Meta’s business practices (Judin, 2023), I think it is safe to assume Facebook and Instagram have adopted this practice.</p>
<p>The use of AI to track and analyze user behavior can border on the invasive, as it delves into individuals’ online activities and personal preferences. The hyper-targeted content delivered through behavioral marketing can create echo chambers and reinforce biases, limiting exposure to diverse viewpoints and information. This unethical surveillance and tracking of individuals’ online behavior can be seen as manipulation.</p>
<h3 id="using-ai-to-distort-peoples-view-of-others-and-themselves">Using AI to distort people’s view of others and themselves</h3>
<p>Facial recognition is used in social media for various purposes, but for the purpose of this text, I want to focus on “beauty filters”. By beauty filters, I mean filters that enhance a person’s appearance to look more conventionally attractive and/or smooth out imperfections.</p>
<p>A study by Ozimek et al. found that “(. . . ) a significant negative correlation was found between photo editing behaviour and self-perceived attractiveness in terms of appearance” (Ozimek et al., 2023, p. 8). AI-powered beauty filters on, for example, Instagram facilitate exactly this kind of photo-editing behaviour. A concern the same study raised is that these AI-powered beauty filters might remove or gloss over features of a person’s appearance that they might not themselves deem unattractive. The study also mentions that “Numerous studies indicated a positive correlation between self-perceived attractiveness and self-esteem (. . . )” (Ozimek et al., 2023, p. 5).</p>
<p>In pursuit of adhering to societal beauty norms, these filters might encourage the homogenization of beauty, where individuality and unique features are diminished. This can lead to a potential loss of self-identity and a sense of disconnection from one’s authentic self. As a result, individuals might unknowingly become accustomed to an altered version of themselves, making it harder to distinguish between their real and filtered self.</p>
<h3 id="how-is-this-a-fairness-issue">How is this a fairness issue?</h3>
<p>The use of AI in social media raises fairness issues across its various applications. In recommendation algorithms, AI-driven personalization can enhance the user experience, but may inadvertently contribute to filter bubbles. These bubbles can isolate users within their existing beliefs and preferences, limiting exposure to diverse viewpoints and reinforcing biases. </p>
<p>Behavioral marketing, another facet of AI in social media, presents privacy concerns. While it optimizes ad effectiveness by delivering highly targeted content, it surveils user behavior and preferences. This raises ethical issues surrounding privacy rights and the potential for discriminatory practices. Hyper-targeted ads may perpetuate stereotypes, as well as biases embedded in historical data.</p>
<p>Additionally, AI’s involvement in beauty filters and self-perception on social media can have far-reaching implications. These filters often promote conventional beauty standards, which can distort users’ self-image and prompt an increased focus on conforming to these ideals. This AI-driven distortion of self-perception can disproportionately impact individuals who do not fit within these beauty standards, potentially leading to feelings of inadequacy and lower self-esteem. As a result, these beauty filters might perpetuate societal biases as it pertains to appearance.</p>
<p>In essence, AI fairness issues in social media concern algorithmic biases, privacy concerns, and perpetuation of biases of appearance. Addressing these issues would require a careful balance of ethical AI practices, adequate regulation, and a more user-centered design to ensure that AI-driven systems promote a safe digital environment where social media users are not exploited for profit.</p>
<h3 id="conclusion">Conclusion</h3>
<p>The integration of AI into social media has ignited discussions surrounding its consequences and ethical dimensions. It plays a significant role in social media, with AI-powered content curation being one of its core components. The approach utilizes recommendation algorithms powered by AI to engage users by exposing personalized content. However, there are, of course, ethical concerns that arise regarding the potential exploitation of user data for financial gain.</p>
<p>This, in tandem with targeted advertising, employs behavioral marketing to track and analyze user behavior, delivering personalized ads. This enhances ad effectiveness but could also contribute to the creation of echo chambers, where users are exposed only to information that aligns with their already existing beliefs.</p>
<p>The influence of AI on self-image is another compelling dimension. AI-driven beauty filters can alter the users’ self-perceived attractiveness, in turn affecting self-esteem and encouraging conformity to conventional beauty standards.</p>
<p>The discussion about AI’s role in social media extends to fairness concerns, including algorithmic biases, privacy issues, and the perpetuation of appearance-related biases. Finding a balance in the regulation of beauty filters, powered by AI, is important to combat these concerns and foster a safer digital environment. </p>
<p>While challenges are apparent, there seems to be a need for more stringent regulation of AI in social media. Mainly to prevent the exploitation of users for profit, but also to prevent facilitation of an increase in psychological and cognitive problems. Hopefully, with growing awareness and evolving technology, there will be an opportunity to mitigate AI’s potential risks through robust ethical guidelines and vigilant oversight. This might help to offer enriching user experiences without compromising psychological and cognitive health.</p>
<h3 id="references">References</h3>
<ol>
<li><p>Barocas, S., Hardt, M., &amp; Narayanan, A. (2019). Fairness and Machine Learning: Limitations and Opportunities. fairmlbook.org. <a href="http://www.fairmlbook.org">http://www.fairmlbook.org</a></p>
</li>
<li><p>Boerman, S. C., Kruikemeier, S., &amp; Zuiderveen Borgesius, F. J. (2017). Online Behavioral Advertising: A Literature Review and Research Agenda. Journal of Advertising, 46(3), 363-376. <a href="https://doi.org/10.1080/00913367.2017.1339368">https://doi.org/10.1080/00913367.2017.1339368</a></p>
</li>
<li><p>Boers, E., Afzali, M. H., &amp; Conrod, P. (2020). Social media use and alcohol consumption in teens. Preventive Medicine. <a href="https://www.sciencedirect.com/science/article/pii/S0091743520300165">https://www.sciencedirect.com/science/article/pii/S0091743520300165</a></p>
</li>
<li><p>Boers, E., Afzali, M. H., Newton, N., &amp; Conrod, P. (2019). Social media usage and depression in adolescents. JAMA Pediatrics. <a href="https://jamanetwork.com/journals/jamapediatrics/article-abstract/2737909">https://jamanetwork.com/journals/jamapediatrics/article-abstract/2737909</a></p>
</li>
<li><p>Center for Humane Technology. (2022). Ledger of Harms. <a href="https://ledger.humanetech.com/">https://ledger.humanetech.com/</a></p>
</li>
<li><p>Fayyaz, Z., Ebrahimian, M., Nawara, D., Ibrahim, A., &amp; Kashef, R. (2020). Recommendation Systems: Algorithms, Challenges, Metrics, and Business Opportunities. Applied Sciences, 10(21), 7748. <a href="https://doi.org/10.3390/app10217748">https://doi.org/10.3390/app10217748</a></p>
</li>
<li><p>Hadar, A., Hadas, I., Lazarovits, A., Alyagon, U., Eliraz, D., &amp; Zargen, A. (2017). Screen time and mental arithmetic in smartphone users. PLoS One. <a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0180094">https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0180094</a></p>
</li>
<li><p>Hutton, J. S., Dudley, J., &amp; Horowitz-Kraus, T. (2019). Screen-based media and children’s brain development. JAMA Pediatrics. <a href="https://jamanetwork.com/journals/jamapediatrics/fullarticle/2754101">https://jamanetwork.com/journals/jamapediatrics/fullarticle/2754101</a></p>
</li>
<li><p>Judin, T. (2023, July 17). Midlertidig forbud mot adferdsbasert markedsføring på Facebook og Instagram. Datatilsynet. <a href="https://www.datatilsynet.no/aktuelt/aktuelle-nyheter-2023/midlertidig-forbud-mot-adferdsbasert-markedsforing-pa-facebook-og-instagram/">https://www.datatilsynet.no/aktuelt/aktuelle-nyheter-2023/midlertidig-forbud-mot-adferdsbasert-markedsforing-pa-facebook-og-instagram/</a></p>
</li>
<li><p>Lemola, A., Perkinson-Gloor, N., Brand, S., &amp; Dewald-Kaufman, J. (2014). Electronic media use at night and depressive symptoms. Journal of Youth and Adolescence. <a href="http://dx.doi.org/10.1007/s10964-014-0176-x">http://dx.doi.org/10.1007/s10964-014-0176-x</a></p>
</li>
<li><p>Madigan, S., Browne, D. T., Racine, N., &amp; Mori, C. (2019). A longitudinal study of screen time in children. JAMA Pediatrics. <a href="https://jamanetwork.com/journals/jamapediatrics/fullarticle/2722666">https://jamanetwork.com/journals/jamapediatrics/fullarticle/2722666</a></p>
</li>
<li><p>Merriam-Webster. (n.d.). Artificial intelligence. In Merriam-Webster.com dictionary. Retrieved October 15, 2023, from <a href="https://www.merriam-webster.com/dictionary/artificial%20intelligence">https://www.merriam-webster.com/dictionary/artificial%20intelligence</a></p>
</li>
<li><p>Merriam-Webster. (n.d.). Social media. In Merriam-Webster.com dictionary. Retrieved October 15, 2023, from <a href="https://www.merriam-webster.com/dictionary/social%20media">https://www.merriam-webster.com/dictionary/social%20media</a></p>
</li>
<li><p>Mickes, L., Darby, R. S., Hwe, V., Bajic, D., Warker, J. A., Harris, C. R., &amp; Christenfeld, N. J. S. (2013). Major memory for microblogs: What makes a message worth remembering? Memory &amp; Cognition. <a href="http://dx.doi.org/10.3758/s13421-012-0281-6">http://dx.doi.org/10.3758/s13421-012-0281-6</a></p>
</li>
<li><p>Ozimek, P., Lainas, S., Bierhoff, H.-W., et al. (2023). How photo editing in social media shapes self-perceived attractiveness and self-esteem via self-objectification and physical appearance comparisons. BMC Psychology, 11(99), 1-14. <a href="https://doi.org/10.1186/s40359-023-01143-0">https://doi.org/10.1186/s40359-023-01143-0</a></p>
</li>
<li><p>Uncapher, M. R., &amp; Wagner, A. D. (2018). Media multitasking and cognitive abilities. Proceedings of the National Academy of Sciences, 115(40), 9889-9894. <a href="https://www.pnas.org/content/115/40/9889">https://www.pnas.org/content/115/40/9889</a></p>
</li>
<li><p>van Geel, M., Vedder, P., &amp; Tanilon, J. (2014). Cyberbullying and adolescent mental health: Systematic review. JAMA Pediatrics. <a href="https://jamanetwork.com/journals/jamapediatrics/fullarticle/1840250">https://jamanetwork.com/journals/jamapediatrics/fullarticle/1840250</a></p>
</li>
<li><p>Ward, A. F., Duke, K., Gneezy, A., &amp; Bos, M. W. (2017). Brain drain: The mere presence of one’s own smartphone reduces available cognitive capacity. Journal of the Association for Consumer Research, 2(2). <a href="https://www.journals.uchicago.edu/doi/abs/10.1086/691462">https://www.journals.uchicago.edu/doi/abs/10.1086/691462</a></p>
</li>
</ol>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Nextcloud, Snap and Nginx reversed proxy]]></title>
            <link>https://test.cengelsen.no/en/blog/nextcloud-snap-and-nginx-reversed-proxy</link>
            <guid isPermaLink="false">https://test.cengelsen.no/en/blog/nextcloud-snap-and-nginx-reversed-proxy</guid>
            <pubDate>Sun, 11 Sep 2022 00:00:00 GMT</pubDate>
            <description><![CDATA[An instruction on how to run Nextcloud installed through Snap behind an Nginx reverse proxy.]]></description>
            <content:encoded><![CDATA[<h2 id="what-is-nextcloud">What is Nextcloud?</h2>
<p>Nextcloud is an open source cloud storage alternative to Dropbox and Google Drive. You can run it locally on your own machine, which means you&#39;re in control of your files at all times. With Nextcloud Hub, they aim to have one integrated solution for file storage, file sharing, chat services, email services and document collaboration. In addition, it is also possible to install various apps into your Nextcloud installation. Although they started in 2016, they estimate that there are now over 400,000 Nextcloud servers online.</p>
<h2 id="explanation-of-problem-statement">Explanation of problem statement</h2>
<p>In my search for answers on the internet, I am left with the impression that this combination of configuration is not common. Therefore, I will try to be as clear as possible, to prevent confusion and wasted time. What I want to explain in this instruction is how to run a Nextcloud instance, installed and configured through Snap, behind an Nginx proxy server. The proxy server takes care of the SSL termination, while the service server takes care of everything else Nginx-related.</p>
<p>If you have configured an Nginx proxy server as described in [my proxy instructions]( {{ relref path=&quot;proxy-instruks.md&quot; lang=&quot;en&quot; }} ), then you now have two virtual machines: one virtual machine running Nginx as a proxy and one virtual machine running Nginx as an &quot;ordinary&quot; web server. Nextcloud should also run on the machine running the &quot;ordinary&quot; web server. The problem is then that Nextcloud must communicate correctly with its local &quot;ordinary&quot; web server, and that the &quot;ordinary&quot; web server must communicate correctly with the proxy server, so that I can reach the Nextcloud service by going to <em><a href="https://example.domain.com">https://example.domain.com</a></em>.</p>
<h2 id="prerequisites">Prerequisites</h2>
<ol>
<li>The ability to install Snap. </li>
<li>64bit CPU and 64bit OS are recommended. </li>
<li>At least 128MB RAM per process, but 512MB RAM per process is recommended. </li>
<li>Nginx configured as a reverse proxy, [as described here]({{ relref path=&quot;/proxy-instruction.md&quot; lang=&quot;en&quot; }}).</li>
</ol>
<h2 id="install-snap">Install Snap</h2>
<p>Installation of Snap itself may vary between different systems. A guide for different systems <a href="https://snapcraft.io/docs/installing-snapd">can be found here</a>. </p>
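<p>On Debian- and Ubuntu-based systems, for example, installing Snap typically looks like the sketch below. This is an assumption about your distribution; package names and commands differ elsewhere, so consult the linked guide for your system:</p>
<pre><code class="language-sh"># Install the snapd daemon from the distribution repositories
sudo apt-get update
sudo apt-get install snapd
</code></pre>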
<h2 id="installing-nextcloud">Installing Nextcloud</h2>
<p>There are other ways to install Nextcloud that don&#39;t involve Snap, but here the point is that Nextcloud is installed through Snap. Once you have Snap, you can follow these steps: </p>
<ol>
<li><code>sudo snap install nextcloud</code> </li>
<li><code>sudo nextcloud.manual-install *sudouser* *password*</code></li>
</ol>
<h2 id="configuring-nextcloud">Configuring Nextcloud</h2>
<p>In order for Nextcloud to communicate with Nginx correctly, some Nextcloud configuration is needed. You need to do the following: </p>
<ol>
<li><code>sudo snap stop nextcloud</code></li>
<li>Open Nextcloud&#39;s configuration file. For Ubuntu 20.04, it will be <code>/var/snap/nextcloud/31222/nextcloud/config/config.php</code>.</li>
<li>Add these lines to the bottom of the file:</li>
</ol>
<pre><code>&#39;overwritehost&#39; =&gt; &#39;example.domain.com&#39;,
&#39;overwriteprotocol&#39; =&gt; &#39;https&#39;,
&#39;overwritewebroot&#39; =&gt; &#39;/&#39;,
</code></pre>
<p>also change <code>&#39;trusted_proxies&#39;</code> to look like this:</p>
<pre><code>&#39;trusted_proxies&#39; =&gt;
  array (
    0 =&gt; &#39;*Proxy-server IP*&#39;,
  ),
</code></pre>
<p>If you don&#39;t have the <code>trusted_proxies</code> variable, you&#39;ll need to add it too. </p>
<p>It is also mentioned in many places on the web that you need to change the <code>overwrite.cli.url</code> variable to <code>https://example.domain.com</code>, but I have left it at its default, namely <code>https://localhost</code>.</p>
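<p>Putting the pieces together, the relevant section of <code>config.php</code> might look like this sketch (the IP address and domain are placeholders, and I have kept <code>overwrite.cli.url</code> at its default as discussed above):</p>
<pre><code>&#39;trusted_proxies&#39; =&gt;
  array (
    0 =&gt; &#39;*Proxy-server IP*&#39;,
  ),
&#39;overwritehost&#39; =&gt; &#39;example.domain.com&#39;,
&#39;overwriteprotocol&#39; =&gt; &#39;https&#39;,
&#39;overwritewebroot&#39; =&gt; &#39;/&#39;,
&#39;overwrite.cli.url&#39; =&gt; &#39;https://localhost&#39;,
</code></pre>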
<ol start="4">
<li><code>sudo nextcloud.disable-https</code></li>
<li><code>sudo snap set nextcloud ports.http=*free port*</code>. The same is possible for https: <code>sudo snap set nextcloud ports.https=*free port*</code></li>
<li><code>sudo snap start nextcloud</code>. Nextcloud now has all the necessary configuration to be able to communicate with Nginx.</li>
</ol>
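<p>If you want to sanity-check the port configuration before moving on, the snap exposes its settings through <code>snap get</code>. The curl check is a sketch and assumes Nextcloud is listening on the HTTP port you chose:</p>
<pre><code class="language-sh"># Show the HTTP/HTTPS ports the nextcloud snap is configured to use
sudo snap get nextcloud ports
# Confirm that Nextcloud answers locally on the chosen HTTP port
curl -I http://127.0.0.1:*free port*
</code></pre>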
<h2 id="configuring-nginx">Configuring Nginx</h2>
<p>The configuration below assumes that both the proxy server and the service server have already been set up as described in the prerequisites mentioned earlier.</p>
<p>Nginx-config on Proxy Server:</p>
<pre><code class="language-bash">server {

        server_name *domain name*;

        location / {
                include /etc/nginx/proxy_params;

                proxy_pass http://*nextcloud-servers IP-address*/; # In LXD, you can also just type *container name*.lxd

        }


        real_ip_header proxy_protocol;
        set_real_ip_from 127.0.0.1;



    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/*example.domain.com*/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/*example.domain.com*/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}
server {
    if ($host = *example.domain.com*) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

        listen 80 proxy_protocol;
        listen [::]:80 proxy_protocol;

        server_name *example.domain.com*;
    return 404; # managed by Certbot


}
</code></pre>
<p>Nginx-config on the Nextcloud-server:</p>
<pre><code class="language-bash">server {

        listen 80;
        listen [::]:80;

        server_name *example.domain.com*;

        location / {
                proxy_pass_header   Server;
                proxy_set_header    Host $host;
                proxy_set_header    X-Real-IP $remote_addr;
                proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header    X-Forwarded-Proto $scheme;
                proxy_pass          http://127.0.0.1:*nextcloud.http-port*;
        }
}
</code></pre>
<h2 id="kilder">Sources</h2>
<ol>
<li><a href="https://github.com/nextcloud-snap/nextcloud-snap/wiki/Putting-the-snap-behind-a-reverse-proxy">Putting the snap behind a reverse proxy</a>, last read 11.09.2022.</li>
<li><a href="https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-nextcloud-on-ubuntu-20-04">How To Install and Configure Nextcloud on Ubuntu 20.04</a>, last read 11.09.2022.</li>
<li><a href="https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/reverse_proxy_configuration.html">Nextcloud configuration &gt; Reverse proxy</a>, last read 11.09.2022.</li>
<li><a href="https://www.vanwerkhoven.org/blog/2021/setting-up-nextcloud-behind-https-nginx-proxy/">Setting up Nextcloud behind https nginx proxy</a>, last read 11.09.2022.</li>
</ol>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Nginx as reversed proxy in LXD]]></title>
            <link>https://test.cengelsen.no/en/blog/nginx-as-reversed-proxy-in-lxd</link>
            <guid isPermaLink="false">https://test.cengelsen.no/en/blog/nginx-as-reversed-proxy-in-lxd</guid>
            <pubDate>Sun, 11 Sep 2022 00:00:00 GMT</pubDate>
            <description><![CDATA[An instruction on how to rig Nginx as a proxy server in LXD.]]></description>
            <content:encoded><![CDATA[<h2 id="dilemma">Dilemma</h2>
<p>What I want to do is have a common container that handles all the &quot;reverse proxy&quot; redirection and SSL termination for all the other containers on the server. I want to use Nginx for this purpose.</p>
<h2 id="explanation">Explanation</h2>
<p>It might be a little hard to imagine what the logistics of this will be like. What you should be left with after this instruction is one LXD container that acts as an Nginx proxy server, and at least one LXD container that holds the service you wish to expose to the internet. The point is that the proxy server receives all requests from the internet and forwards the traffic to the container that holds the service. The proxy container and the service container both run an instance of Nginx, and the two instances &quot;communicate&quot; with each other to direct web traffic correctly.</p>
<h2 id="installing-and-configuring-lxd">Installing and configuring LXD</h2>
<p>This instruction assumes that you have installed and configured LXD on your server. You can [follow my instructions]({{ relref path=&quot;lxd-instruks.md&quot; lang=&quot;en&quot; }}) to do so.</p>
<p>Once this is done, then you need to create &quot;devices&quot; for the proxy container. Outside the proxy container, you need to run:</p>
<pre><code class="language-bash">lxc config device add *name_of_container* *name_of_unit* proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80 proxy_protocol=true

lxc config device add *name_of_container* *name_of_unit* proxy listen=tcp:0.0.0.0:443 connect=tcp:127.0.0.1:443 proxy_protocol=true
</code></pre>
<p>This is what the various parameters mean:</p>
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Explanation</th>
</tr>
</thead>
<tbody><tr>
<td><em>name_of_container</em></td>
<td>The name of the proxy container</td>
</tr>
<tr>
<td><em>name_of_unit</em></td>
<td>The name of the &quot;device&quot; you are creating</td>
</tr>
<tr>
<td>proxy</td>
<td>What type of device you are creating</td>
</tr>
<tr>
<td>listen=tcp:0.0.0.0:80</td>
<td>The proxy device should listen on the host on port 80, protocol TCP, on all interfaces</td>
</tr>
<tr>
<td>connect=tcp:127.0.0.1:80 &nbsp; &nbsp;</td>
<td>The proxy device should connect to the container on port 80, protocol TCP, on the loopback interface. It is not possible to type &quot;localhost&quot;, only the IP address, in LXD versions &gt;= 3.13.</td>
</tr>
<tr>
<td>proxy_protocol</td>
<td>Requests to enable the proxy protocol, so that the reverse proxy obtains the original IP address from the proxy device</td>
</tr>
</tbody></table>
<p>If you want to remove the proxy device, you can type:</p>
<p><code>lxc config device remove *name_of_container* *name_of_unit*</code></p>
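<p>To verify which devices a container currently has attached, you can list them with <code>lxc config device show</code> (a sketch; the container name is a placeholder):</p>
<pre><code class="language-sh"># List all devices attached to the container
lxc config device show *name_of_container*
</code></pre>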
<h2 id="installing-nginx">Installing Nginx</h2>
<p>How you install Nginx varies depending on which system you use. <a href="https://www.nginx.com/resources/wiki/start/topics/tutorials/install/">Here is an instruction on how to install Nginx on different systems</a>.</p>
<h2 id="configuring-nginx-in-the-service-container">Configuring Nginx in the service container</h2>
<p>Some configuration is needed for the Nginx running in the service container. Create <code>/etc/nginx/conf.d/real-ip.conf</code> in the service container:</p>
<pre><code class="language-sh">real_ip_header X-Real-IP;
set_real_ip_from *name_of_proxy_container*.lxd;
</code></pre>
<p>Create an Nginx config, <code>/etc/nginx/sites-available/*config-name*</code>, in the service container:</p>
<pre><code class="language-sh">server {
        listen 80;
        listen [::]:80;

        server_name *domain-name*;

        root /path/to/website/folder;
        index index.html;

        location / {try_files $uri $uri/ =404;
        }
}
</code></pre>
<p>This configuration file may vary depending on the service&#39;s requirements for the Nginx configuration. The example above is for serving a static web page. Here, SSL termination is not needed, since the proxy server handles it.</p>
<h2 id="configuring-nginx-in-the-proxy-container">Configuring Nginx in the proxy container</h2>
<p>Create an Nginx config, <code>/etc/nginx/sites-available/*config-name*</code>, in the proxy container:</p>
<pre><code class="language-sh">server {
        listen 80 proxy_protocol;
        listen [::]:80 proxy_protocol;

        server_name *domain-name*;

        location / {
                include /etc/nginx/proxy_params;

                proxy_pass http://*name_of_service_container*.lxd;
        }

        real_ip_header proxy_protocol;
        set_real_ip_from 127.0.0.1;
}
</code></pre>
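<p>After creating the config in the proxy container, you may want to enable the site and validate the syntax before requesting certificates. This is a sketch and assumes the Debian-style <code>sites-available</code>/<code>sites-enabled</code> layout used above:</p>
<pre><code class="language-sh"># Enable the site, validate the configuration, then reload Nginx
sudo ln -s /etc/nginx/sites-available/*config-name* /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
</code></pre>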
<p>Get SSL security through Certbot. This is a procedure in Ubuntu 20.04:</p>
<ol>
<li><code>lxc shell *proxy_container_name*</code></li>
<li><code>sudo add-apt-repository ppa:certbot/certbot</code></li>
<li><code>sudo apt-get install certbot python-certbot-nginx</code></li>
<li><code>sudo certbot --nginx</code></li>
</ol>
<ul>
<li>Agree</li>
<li>No</li>
<li><em>choose correct domain</em></li>
<li>2 (Redirect)</li>
</ul>
<ol start="5">
<li>Change the new lines in the nginx config to look like this:</li>
</ol>
<pre><code>listen 443 ssl proxy_protocol; # managed by Certbot
listen [::]:443 ssl proxy_protocol; # managed by Certbot
</code></pre>
<ol start="6">
<li><code>sudo systemctl restart nginx</code></li>
</ol>
<h2 id="kilder">Sources</h2>
<ol>
<li><a href="https://www.linode.com/docs/guides/beginners-guide-to-lxd-reverse-proxy/">A Beginner&#39;s Guide to LXD: Setting Up a Reverse Proxy to Host Multiple Websites</a>, last read 12.09.2022.</li>
<li><a href="https://www.nginx.com/resources/wiki/start/topics/tutorials/install/">Nginx &gt; Tutorials &gt; Install</a>, last read 12.09.2022.</li>
</ol>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to configure LXD to run locally]]></title>
            <link>https://test.cengelsen.no/en/blog/how-to-configure-lxd-to-run-locally</link>
            <guid isPermaLink="false">https://test.cengelsen.no/en/blog/how-to-configure-lxd-to-run-locally</guid>
            <pubDate>Sun, 24 Jul 2022 00:00:00 GMT</pubDate>
            <description><![CDATA[An instruction on how to set up LXD locally for virtualization.]]></description>
            <content:encoded><![CDATA[<h2 id="what-is-lxd">What is LXD?</h2>
<p>LXD is a CLI service for managing and operating virtual machines and containers. LXD is written in Go and is licensed under the Apache2 license. The ecosystem can be divided into LXD (the Linux container daemon) and LXC (Linux Containers). LXC is the software that enables the creation of virtual machines and containers. LXD is built on top of LXC, and supposedly improves on LXC with better security, scalability, ease of use, and processing cost. Another small difference is that LXC exposes a C API, while LXD exposes a REST API.</p>
<p>LXD can also <a href="https://linuxcontainers.org/lxd/third-party-integrations/">be integrated into other platforms and tools</a>, such as Ansible, Terraform and Juju. </p>
<p>Before installing it locally, you can also <a href="https://linuxcontainers.org/lxd/try-it/">try a demo of it here</a>! </p>
<h2 id="purpose-of-this-instruction">Purpose of this instruction</h2>
<p>This instruction aims to guide you to install and configure LXD so that you can use LXD on-premises for easy virtualization. Following this instruction, you should be left with an LXD environment where you can create new containers, start and stop them and delete them, as well as allowing these containers to communicate with each other. There are, of course, more advanced things one can do with LXD, but that&#39;s not covered here.</p>
<h2 id="dependencies">Dependencies</h2>
<p>If you install it through Snap, it&#39;s recommended to have at least 2GB of RAM. Besides that, it is also recommended that a ZFS file system is used as the &quot;storage pool&quot; for LXD. If so, LXD requires you to have installed <code>zfsutils-linux</code>. </p>
<p>The motherboard and processor must also support virtualization. </p>
<h2 id="installation">Installation</h2>
<h3 id="snap">Snap</h3>
<p>LXD is available for download and installation through Snap. If you have Snap, you can run <code>snap install lxd</code>. As of Ubuntu 20.04, LXD comes pre-installed as a Snap package after a fresh installation of Ubuntu. </p>
<p>Installation of Snap itself may vary between different systems. A guide for different systems <a href="https://www.ubuntupit.com/how-to-install-snap-package-manager-in-linux-distributions/">can be found here</a>.</p>
<h3 id="manually">Manually</h3>
<p>Since LXD comes pre-installed with Ubuntu, I only had to think about configuration. </p>
<p>On many operating systems, Snap doesn&#39;t come with a fresh install, but one doesn&#39;t have to use Snap to install it. <a href="https://linuxcontainers.org/lxd/getting-started-cli/">Here&#39;s an overview</a> for installation on a few different systems without Snap.</p>
<h2 id="configuration">Configuration</h2>
<h3 id="user-rights">User rights</h3>
<p>Add your user to the lxd group: </p>
<p><code>sudo adduser &lt;name of user&gt; lxd</code> </p>
<p>You can confirm by typing <code>id -nG</code>. If lxd is in that list, the user has lxd rights. </p>
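<p>Note that group membership only takes effect in new login sessions. If you don&#39;t want to log out and back in, one common workaround is to start a subshell with the new group active:</p>
<pre><code class="language-sh"># Start a subshell where the lxd group membership is active
newgrp lxd
</code></pre>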
<h3 id="storage-space">Storage space</h3>
<p>Before initializing LXD, it&#39;s important that you&#39;ve set aside the storage space you want to use. LXD creates a ZFS pool on the dedicated storage you&#39;ve set aside, so you don&#39;t have to worry about ZFS configuration in advance.</p>
<h3 id="initialization">Initialization</h3>
<p>If it is the first time LXD is running on the machine, one must first initialize with <code>lxd init</code>. In this process, you get a series of questions to configure the storage space that LXD will use. They are as follows:</p>
<pre><code class="language-sh">Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: &lt;choose a name&gt;
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]: zfs
Create a new ZFS pool? (yes/no) [default=yes]: yes
Would you like to use an existing block device? (yes/no) [default=no]: yes
Path to the existing block device: path/to/storage/device/&lt;name of storage device&gt;
</code></pre>
<p>After that, storage setup is done. </p>
<h3 id="network">Network</h3>
<p>In the same initialization, the LXD network is also configured. As before, you get a series of questions. They are as follows:</p>
<pre><code class="language-sh">Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]: yes
What should the new bridge be called? [default=lxdbr0]: lxdbr0
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: auto
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: auto
</code></pre>
<p>This allows for the following properties: </p>
<ul>
<li>Each container is automatically assigned a private IP address.</li>
<li>The containers can communicate with each other over the private network.</li>
<li>Each container can initiate contact with the internet.</li>
<li>Each container remains inaccessible from the internet. Contact CANNOT be initiated from the internet to reach the container.</li>
</ul>
<h3 id="miscellaneous">Miscellaneous</h3>
<p>You also get 3 questions about miscellaneous things:</p>
<pre><code class="language-sh">Would you like LXD to be available over the network? (yes/no) [default=no]: no
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] yes
Would you like a YAML &quot;lxd init&quot; preseed to be printed? (yes/no) [default=no]: no
</code></pre>
<p>After this, a script runs in the background. If you don&#39;t get any output, that&#39;s normal. </p>
<h2 id="managing-containers">Managing containers</h2>
<p>Although the service is called LXD, you use the command <code>lxc</code> to communicate with the LXD hypervisor. </p>
<h3 id="see-overview">See overview</h3>
<p>To list all the containers you have, type <code>lxc list</code>. To see the list of available commands, type <code>lxc -h</code>. </p>
<h3 id="create-a-container">Create a container</h3>
<p>To create a new container, type <code>lxc launch &lt;name of OS&gt;:&lt;version number&gt; &lt;name of container&gt;</code>. If you want to create a container with Ubuntu 20.04 named <em>webserver</em>, type <code>lxc launch ubuntu:20.04 webserver</code>. This will create and start the container.</p>
<p>Technically, <code>&lt;name of OS&gt;</code> is the identifier for a preconfigured list of LXD image files, and <code>&lt;version number&gt;</code> is the name of the image file; 20.04 is short for Ubuntu 20.04. </p>
<p>To see a complete list of available image files, type <code>lxc image list images:</code>. You can also limit the list to just Ubuntu by typing <code>lxc image list ubuntu:</code>. You can extract info about an image file by typing <code>lxc image info &lt;name of OS&gt;:&lt;OS version&gt;</code>. </p>
<h3 id="startstopdelete-a-container">Start/stop/delete a container</h3>
<p>Start the container: <code>lxc start &lt;name of container&gt;</code>. Stop the container: <code>lxc stop &lt;name of container&gt;</code>. Delete the container: <code>lxc delete &lt;name of container&gt;</code>. </p>
<h3 id="give-a-container-static-ip-address">Give a container static IP address</h3>
<p>LXD has a built-in DHCP server and assigns a random IP address to each container. The initially assigned IP address persists even if the container is rebooted. This is how to assign a static IP address to a container.</p>
<p>Overwrite the current NIC: </p>
<p><code>lxc config device override &lt;name of container&gt; &lt;name of NIC&gt;</code> </p>
<p>Give the container the static IP address and reboot: </p>
<p>`lxc config device set <name of container> <name of NIC> ipv4.address <ip-address>&#39;&#39;</p>
<p><code>LXC restart &lt;name of container&gt;</code></p>
<h2 id="expose-the-container-to-the-internet">Expose the container to the internet</h2>
<p>After all this configuration, it is still not possible to access the container from the internet. Therefore, you must create an iptables rule so that network traffic is forwarded to the container.</p>
<p>This requires two things: your public IP address and the container&#39;s IP address.</p>
<h3 id="installing-nginx">Installing Nginx</h3>
<p>A good way to test whether the iptables rule works is to install Nginx in the container. First you have to get into the container, which you can do by typing <code>lxc shell &lt;name of container&gt;</code>; this works much like SSH-ing into it. You can also type <code>lxc exec &lt;name of container&gt; -- sh -c &quot;&lt;set of commands&gt;&quot;</code> if you only want to execute a few commands in the container. Next, install Nginx in the conventional way for the OS you have chosen. For Ubuntu, that is simply:</p>
<pre><code class="language-sh">apt-get update &amp;&amp; apt-get install nginx
</code></pre>
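<p>Once Nginx is running, you can verify from the host that it answers on the container&#39;s address before touching iptables at all. A small sketch; the helper name and the idea of checking just the status code are my own additions:</p>
<pre><code class="language-sh">#!/bin/sh

# Print only the HTTP status code for a GET against the given host[:port].
check_http() {
    curl -s -o /dev/null -w &quot;%{http_code}&quot; &quot;http://$1/&quot;
}

# Substitute the IP address you gave the container earlier;
# a 200 means Nginx answered:
# check_http &lt;container IP-address&gt;
</code></pre>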
<h3 id="open-to-network-traffic">Open to network traffic</h3>
<p>This step is quite dependent on your local network configuration, but if you don&#39;t have any special network configurations, this should work:</p>
<pre><code class="language-sh">PORT=80 PUBLIC_IP=&lt;public IP-address&gt; CONTAINER_IP=&lt;container IP-address&gt; IFACE=&lt;name of NIC&gt; \
sudo -E bash -c &#39;iptables -t nat -I PREROUTING -i $IFACE -p TCP -d $PUBLIC_IP \
--dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment &quot;&lt;some comment&gt;&quot;&#39;
</code></pre>
<p>Explanation: </p>
<ul>
<li><code>-t nat</code> specifies that you should use the NAT table for address translation</li>
<li><code>-I PREROUTING</code> specifies that you insert the rule into the chain called &quot;PREROUTING&quot;</li>
<li><code>-i $IFACE</code> specifies the NIC to use</li>
<li><code>-p TCP</code> specifies that the TCP protocol should be used</li>
<li><code>-d $PUBLIC_IP</code> specifies the IP address that is the destination of the rule</li>
<li><code>--dport $PORT</code> specifies the port that is the destination of the rule</li>
<li><code>-j DNAT</code> specifies that you should perform a &quot;jump&quot; to destination NAT (DNAT)</li>
<li><code>--to-destination $CONTAINER_IP:$PORT</code> specifies that the request should be sent to the container&#39;s IP address on the specific port.</li>
</ul>
<p>To see a numbered list of the rules, type <code>sudo iptables -t nat -L PREROUTING --line-numbers</code>.</p>
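<p>To avoid retyping the long rule, you can wrap it in a small function and inspect the result before running it. This is just a sketch: the function name and the <code>echo</code>-based dry run are my own additions, and the IP addresses are placeholders:</p>
<pre><code class="language-sh">#!/bin/sh

# Build the DNAT rule as a string, so it can be inspected before running.
build_dnat_rule() {
    iface=$1; public_ip=$2; container_ip=$3; port=$4
    echo &quot;iptables -t nat -I PREROUTING -i $iface -p TCP -d $public_ip --dport $port -j DNAT --to-destination $container_ip:$port&quot;
}

# Inspect the rule first, then run it with sudo when it looks right:
build_dnat_rule eth0 203.0.113.10 10.114.27.2 80
# sudo $(build_dnat_rule eth0 203.0.113.10 10.114.27.2 80)
</code></pre>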
<p>These rules must be applied again every time you restart the machine. Pretty tedious. To avoid this, you can install the <code>iptables-persistent</code> package; the rules are then applied automatically every time you restart the machine.</p>
<p>To remove an iptables rule, you can type <code>sudo iptables -t nat -D PREROUTING &lt;number of rule&gt;</code>. Additionally, you should save the change by typing <code>sudo netfilter-persistent save</code>, so that the rule is not reapplied at the next reboot.</p>
<h2 id="tldr">TL;DR</h2>
<p>Installing and configuring LXD on Ubuntu 20.04:</p>
<ol>
<li><code>sudo apt-get update &amp;&amp; sudo apt-get upgrade</code></li>
<li><code>sudo adduser &lt;username&gt; lxd</code></li>
<li><code>sudo apt-get install -y zfsutils-linux</code></li>
<li><code>sudo lxd init</code></li>
</ol>
<ul>
<li>no</li>
<li>yes</li>
<li>&lt;name of storage pool&gt;</li>
<li>zfs</li>
<li>yes</li>
<li>yes</li>
<li>/path/to/&lt;name of storage device&gt;</li>
<li>no</li>
<li>yes</li>
<li>lxdbr0</li>
<li>auto</li>
<li>auto</li>
<li>no</li>
<li>yes</li>
<li>no</li>
</ul>
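<p>The interactive answers above can also be captured in a preseed file and fed to <code>lxd init --preseed</code>, which is handy for repeatable installs. A rough sketch matching the answers above; the pool name and device path are placeholders, and the exact keys may vary by LXD version:</p>
<pre><code class="language-yaml">storage_pools:
- name: &lt;name of storage pool&gt;
  driver: zfs
  config:
    source: /path/to/&lt;name of storage device&gt;
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
profiles:
- name: default
  devices:
    root:
      path: /
      pool: &lt;name of storage pool&gt;
      type: disk
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
</code></pre>
<p>Apply it with <code>cat preseed.yaml | lxd init --preseed</code>.</p>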
<ol>
<li><code>lxc launch &lt;os-name&gt;:&lt;os-version&gt; &lt;container-name&gt;</code></li>
<li><code>lxc config device override &lt;container-name&gt; &lt;NIC-name&gt;</code></li>
<li><code>lxc config device set &lt;container-name&gt; &lt;NIC-name&gt; ipv4.address &lt;container IP-address&gt;</code></li>
<li><code>lxc restart &lt;container&gt;</code></li>
<li><code>lxc exec &lt;container-name&gt; -- sh -c &quot;apt-get update &amp;&amp; apt-get upgrade &amp;&amp; apt-get install nginx&quot;</code></li>
<li><code>PORT=80 PUBLIC_IP=&lt;public IP-address&gt; CONTAINER_IP=&lt;container IP-address&gt; IFACE=&lt;name of NIC&gt;  sudo -E bash -c &#39;iptables -t nat -I PREROUTING -i $IFACE -p TCP -d $PUBLIC_IP --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment &quot;&lt;some comment&gt;&quot;&#39;</code></li>
<li><code>sudo apt-get install iptables-persistent</code></li>
<li>profit</li>
</ol>
<h3 id="shell-script">Shell-script</h3>
<p>Create and start a new container</p>
<pre><code class="language-sh">#!/bin/bash

lxc list --columns ns4

read -p &quot;Name of OS? &#39;OS:version&#39;: &quot; osName
read -p &quot;Name of container?: &quot; containerName
read -p &quot;IP of container?: &quot; containerIP
read -p &quot;Name of NIC: &quot; nicName
read -p &quot;Server&#39;s public IP-address?: &quot; publicIP
read -p &quot;Port to receive network traffic?: &quot; portNr

lxc launch &quot;$osName&quot; &quot;$containerName&quot;

lxc config device override &quot;$containerName&quot; &quot;$nicName&quot;
lxc config device set &quot;$containerName&quot; &quot;$nicName&quot; ipv4.address &quot;$containerIP&quot;

lxc restart &quot;$containerName&quot;

read -p &quot;Forward incoming connections to container? [yes/no]: &quot; fwdTraffic

if [ &quot;$fwdTraffic&quot; = &quot;yes&quot; ]; then

        read -p &quot;Comment for iptables rule?: &quot; iptComment

        # The comment must be passed through the environment, since the
        # single quotes prevent expansion of $iptComment in the outer shell.
        PORT=$portNr PUBLIC_IP=$publicIP CONTAINER_IP=$containerIP IFACE=$nicName IPT_COMMENT=$iptComment \
        sudo -E bash -c &#39;iptables -t nat -I PREROUTING -i $IFACE -p TCP -d $PUBLIC_IP \
        --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment &quot;$IPT_COMMENT&quot;&#39;
        sudo netfilter-persistent save
        echo &quot;Added rule to iptables. The container is now ready.&quot;
else
        echo &quot;Skipped forwarding network traffic. The container is now ready.&quot;
fi
</code></pre>
<h2 id="optional-installing-lxdui">(Optional) Installing LXDUI</h2>
<p>You can also install a visual user interface in your browser, with <a href="https://github.com/AdaptiveScale/lxdui/wiki">this GitHub project</a>. I haven&#39;t tried it myself yet, but will update this guide once I have.</p>
<h2 id="kilder">Sources</h2>
<p>See something wrong with this guide? <a href="https://github.com/Cengelsen/cengelsen.no">Open a pull request!</a></p>
<ul>
<li><p><a href="https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-lxd-on-ubuntu-20-04">How to Install and Configure LXD on Ubuntu 20.04</a>, last read 24.07.2022.</p>
</li>
<li><p><a href="https://linuxcontainers.org/lxd/getting-started-cli/">LXD &gt; Getting started &gt; Installation</a>, last read 24.07.2022.</p>
</li>
<li><p><a href="https://www.geeksforgeeks.org/difference-between-lxc-and-lxd/">Difference between LXC and LXD</a>, last read 25.07.2022.</p>
</li>
<li><p><a href="https://discuss.linuxcontainers.org/t/comparing-lxd-vs-lxc/24">Comparing LXD vs. LXC</a>, last read 25.07.2022.</p>
</li>
</ul>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A Review of the Innovative Power of the Oculus Rift]]></title>
            <link>https://test.cengelsen.no/en/blog/a-review-of-the-innovative-power-of-the-oculus-rift</link>
            <guid isPermaLink="false">https://test.cengelsen.no/en/blog/a-review-of-the-innovative-power-of-the-oculus-rift</guid>
            <pubDate>Wed, 27 Apr 2022 00:00:00 GMT</pubDate>
            <description><![CDATA[This is a text I wrote in 2020 about VR technology, its history, and how the Oculus Rift was an innovative success.]]></description>
            <content:encoded><![CDATA[<p>What is real? Not only is this an existential question, but a source of creation, innovation, and curiosity. We explore the answers to this question by delving into the virtual realm, and our tool has, since the 60’s, been VR-technology. Recently it has taken up an increasingly large part of the videogame community, starting with several failed attempts in the mid-90’s and continuing with Palmer Luckey’s highly successful <em>Oculus Rift</em>. This new headset sparked new life into the VR-community, including VR-research and development.</p>
<p>In this text, I will use the theories of McQuivey, Rogers, Simon and Winston in the fields of design and innovation to explain VR-headsets, and more specifically the <em>Oculus Rift</em>’s innovative effect.</p>
<p>VR/AR-technology can come in different formats, although the most recognizable to the average<br />consumer is the Head-Mounted Display (HMD). In the beginning, an HMD was heavy, mechanical and<br />had little application other than scientific research. However, over time, the HMD became more<br />electronic with features like motion tracking, 3D stereoscopic imaging, built-in stereo headphones<br />and peripheral controllers.</p>
<p>Virtual Reality (VR) is an artificially created simulation of any reality, altered or otherwise. The<br />hardware can vary from a simple mask with optics that manipulates your vision; to an HMD giving<br />the experience of a fully computer-generated world which can simulate any 3D-environment.<br />Currently, VR is mainly used for entertainment, art design or scientific research. Although, the<br />application can also be extended to remote control of robots, training simulations and psychological<br />treatment.</p>
<p>Augmented Reality (AR), however, differs from VR in that while it does not simulate a reality, it alters<br />our perception of the actual reality. AR-technology can either alter our perception of physical reality;<br />or give us new mechanisms in physical reality to manipulate artificial objects projected from the<br />digital world into the physical world. A great example of this is the VIDEOPLACE technology<br />developed by Myron Krueger. (Virtual Reality Society, n.d)</p>
<p>The first ever occurrence of VR/AR-technology is the <em>Wheatstone mirror stereoscope</em>, invented by Sir Charles Wheatstone in the 1840’s. It was simply a board with two pictures, one on either side of it. In the middle, there was a pair of mirrors at 45-degree angles to the user&#39;s eyes, each reflecting the picture located off to each side of the board. More than a hundred years later, the <em>Sensorama</em>, invented by Morton Heilig in 1962, was introduced to the public. This was a stationary booth, reminiscent of an arcade machine from the 80’s, and could fit up to 4 people at once. One could experience up to six different films engaging all 5 senses. It featured scent producers, a vibrating chair, stereo speakers and a stereoscopic 3D screen.</p>
<p>Heilig also patented the first HMD, called the <em>Telesphere mask</em>. This provided the user with stereoscopic 3D images and stereo sound. It inspired Comeau and Bryan to create the <em>Headsight</em> for the US military, the first HMD with motion tracking. In 1966, the <em>Sayre Gloves</em> were created by Sandin and Defanti, and were the first wired glove peripherals. These gloves used light emitters and photocells in the gloves’ fingers to monitor movement. In other words, the movement of the fingers disrupted the light emitted into the photocell, which was converted into electrical signals.</p>
<p>In 1984, VPL Research, Inc. was founded. They were contracted by NASA in 1986 to make a range of VR-hardware. During their contract, they went on to develop the <em>DataGlove</em> (glove peripheral), the <em>EyePhone</em> (HMD), the <em>AudioSphere</em> (3D sound software) and the <em>DataSuit</em> (bodysuit) (Virtual Reality Society, n.d). In 1989, Mattel, Inc. launched the <em>PowerGlove</em>, inspired by the <em>DataGlove</em>. The <em>PowerGlove</em> was developed as a controller accessory for the Nintendo Entertainment System (NES), but never caught on as a product because it was difficult to use.</p>
<p>2 years later, in 1991, a company called <em>The Virtuality Group</em> launched a VR arcade game called <em>Virtuality</em>. This was the first mass-produced VR entertainment system. It featured real-time stereoscopic 3D images through an HMD with built-in stereo headphones, joysticks, a microphone and multi-player capability. In 1995, three commercial VR-headsets entered mass production: the <em>VFX1 Headgear</em>, released by Forte Technologies, Victormaxx’s <em>Cybermaxx</em> and Virtual i-O’s <em>I-Glasses</em> (Delaney, 1996). They all failed within a short time. At this point, VR had lost its appeal to the general public. But 16 years later, Palmer Luckey created the first prototype of the <em>Oculus Rift</em>, starting a new chapter in the history of VR-technology. But how did this come to be?</p>
<p>While Luckey was working in a mobile phone repair shop, he had a hobby of buying and testing old VR-headsets. He found them inadequate, none of them living up to the expectation of immersing him in a different, virtual world. Later, as he attended a local community college, he started tinkering on his own prototype for a new VR-headset, combining parts of old VR-headsets and screens from mobile phones. He regularly updated a 3D-gaming forum online on his progress. Eventually, after he had produced several prototypes, John Carmack, co-founder of id Software, stumbled upon his posts on the forum. Carmack contacted Luckey and was promptly offered one of the prototypes for free. The prototype blew Carmack away, and he immediately saw the potential of this technology. Carmack repurposed a copy of “Doom 3” to work with the prototype, adjusting his own 3D software to accommodate the prototype’s shortcomings. Later that same year, he showed the prototype to a plethora of journalists at the E3 gaming convention.</p>
<p>Over the next 21 months, Luckey partnered up with Brendan Iribe, Nate Mitchell and Michael Antonov to found Oculus VR. John Carmack also joined their company a little later as CTO. They continued development and launched a Kickstarter project to fund the production of a DIY kit, called the <em>Development Kit 1</em>. In a month, they raised 2.4 million dollars, and this massive success drew the attention of Mark Zuckerberg, founder of Facebook. After some negotiations, the company was sold to Facebook with a price tag of 2 billion dollars (Kumparak, 2014). This was a massive win for Oculus VR, since it now had a virtually endless amount of funding for further development and production. Since then, 3 more versions of the <em>Oculus Rift</em> have been developed and sold, the latest being the <em>Oculus Rift S</em>. But how come the <em>Oculus Rift</em> was this successful? We might find the answer in research on innovation dynamics.</p>
<p>Simon (1996) separates the artificial from the natural and claims that an artificial system has 3 important elements vital to its survival.</p>
<ol>
<li>The inner environment</li>
<li>The outer environment</li>
<li>The task environment</li>
</ol>
<p>The inner environment must be designed such that it responds to the criteria of the outer environment; however, they can only interact through the task environment. In the case of VR-HMDs, it looks a little bit different. These are the environments of the <em>Oculus Rift</em>:</p>
<ol>
<li>The inner environment (VR-HMD)</li>
<li>The 1st outer environment (the computer)</li>
<li>The 2nd outer environment (the person/operator)</li>
<li>The task environment (Giving the operator a means of experiencing and manipulating a<br /> simulated environment)</li>
</ol>
<p>The <em>Oculus Rift’s</em> inner environment consists of 2 small screens, stereo headphones and an ergonomic, low-weight helmet to carry the screens and headphones. In addition, it also has a “motion tracking”-module to register the helmet’s orientation in space. The screens are held in place by the helmet, one in front of each eye. And of course, the headphones are also separated, one speaker over each ear. There is also a protective visor around the screens, such that no discernible light can enter the operator’s line of sight (Desai, P.R., Desai, P.N., Ajmera, &amp; Mehta, 2014).</p>
<p>The inner environment therefore has the means of receiving visual and audio information from the 1st<br />outer environment. It also has the means to feed this information to the 2nd outer environment,<br />through the screens and speakers. In turn, this makes the 2nd outer environment stimulate the inner<br />environment’s gyroscope. This, through the inner environment, feeds the 1st outer environment with<br />orientation data, which it uses to alter the visual information received by the inner environment.<br />The computer, with its hardware, runs the software necessary for the 3D-environment to exist, while<br />the HMD serves as the interface between the operator and the simulated environment.</p>
<p>The <em>Oculus Rift</em> is a collection of subsystems that each have a different function, using their own mechanisms, which add to the performance of the entire inner system’s function. Reducing any subsystem’s ability to perform its function reduces the quality of the whole inner system’s function.</p>
<p>There have been several iterations of the <em>Rift</em>, with the <em>DK1</em>, the <em>Crystal Cove</em>, the <em>DK2</em>, the <em>Crescent Bay</em>, the <em>CV1</em> and finally the <em>Rift S</em>. The <em>CV1</em>, or <em>Consumer Version 1</em>, was the first marketable version produced by Oculus VR. However, both the <em>DK1</em> and <em>DK2</em> were sold to the people backing Luckey’s Kickstarter project. This is Luckey’s way of simulating the task environment, improving the function of the inner environment for every iteration (Winston, 1998). For every version, the resolution, latency, degrees of freedom, processing power, software compatibility and ergonomics have been improved. This has not only improved the task environment, but also given us new knowledge about what is needed to effectively simulate a reality.</p>
<p>The main reason the <em>Oculus Rift</em>’s inner environment serves its intended purpose where previous attempts by other companies have not, is its handling of cybersickness. Cybersickness is an affliction induced by VR-technology, mainly HMDs, causing symptoms similar to motion sickness, but also headaches and eyestrain (LaViola, 2000). Because of the <em>Rift</em>’s ability to not induce cybersickness in its operators, people can use it for extended periods of time. The main contributors to this ability are its low latency (2 ms–30 ms), its wide field of view (100°–110°) and its superior orientation tracking (accelerometer, gyroscope, magnetometer). This feeds the operator with real-time changes in visuals that eliminate conflicting vestibular and spatial perception.</p>
<p>One of the design constraints has been not to induce cybersickness in its users. This is a hard constraint, because inducing cybersickness renders the 2nd outer environment incapable of interacting with the 1st outer environment through the inner environment. In addition, it also had to be of low weight, have an affordable price and not limit the operator’s degrees of freedom. However, these were only soft constraints.</p>
<p>The first iteration of the <em>Oculus Rift</em>, however, was not the optimal solution to these design constraints, but it was a satisficing one. Operators of the first iteration, the <em>DK1</em>, reported that the image was afflicted by a “screen door” effect. In short, this means that the operator could notice gaps between the pixels, making the entire image look like a screen door. This was a hindrance to total immersion. It also only had 3 degrees of freedom: the rotational motions of pitch (x-axis), yaw (y-axis) and roll (z-axis) in 3D space. All later iterations also included translational motion along all axes, such as moving forward/backward, up/down and side to side, resulting in a total of 6 degrees of freedom. Later iterations also have a reduced “screen door” effect.</p>
<p>The <em>Rift</em> has been adapted to human limitations, such as a human’s perception of vestibular and spatial orientation. Had it not accounted for these limitations, the HMD would have induced cybersickness and failed in serving its intended purpose. VR-HMDs are a peculiar instance of the artificial, since, if the limitations and expectations are not accommodated, the inner environment is rendered unusable.</p>
<p>According to Winston (1998, p. 3), technology consists of explicit utterances in the language of science. Meaning that we social creatures, humans, use the development of technology, of mechanical or electrical devices, to express the scientifically established. The process of eventually arriving at the production of an invention starts with “ideation”, which simply means coming up with an idea on the basis of science. And since science inspired it, technology becomes the expression of the idea.</p>
<p>The first mention of the VR-HMD as we know it today was by Stanley G. Weinbaum in his science fiction story <em>Pygmalion’s Spectacles</em>, published in 1935. This “ideation” inspired Morton Heilig, and later Ivan Sutherland, to create their prototypes, the <em>Telesphere mask</em> and the <em>Sword of Damocles</em>, respectively, during the 1960s (Virtual Reality Society, n.d). Both prototypes can be classified as rejected, since no supervening social necessity had operated yet, and no possible application was apparent.</p>
<p>However, research continued for the US Air Force, and later NASA, which further accumulated the competence of scientific knowledge. This reinforces Winston’s affirmation that this process is governed by an <em>accelerator-brake</em> dynamic (Winston, p. 11). As science further expanded its competence in computer science, several more prototypes were built, such as VPL Research’s <em>EyePhone</em>, SEGA’s <em>SEGA VR</em> and Oculus VR’s <em>Oculus Rift</em>.</p>
<p>The <em>EyePhone</em> is an example of a parallel prototype, since it was developed by VPL for NASA’s space flight simulations. Meanwhile, the <em>SEGA VR</em> was an example of a partial prototype, since it was designed to perform effectively as a VR-HMD, but did not perform as well as intended, and was ultimately rejected. The <em>Oculus Rift</em>, however, was an accepted prototype. This is because the &quot;accelerator-brake&quot; dynamic had been in its &quot;brake&quot; period up until then, thus creating a supervening social necessity for a new prototype. I consider the response to the <em>Oculus Rift</em>&#39;s Kickstarter project as evidence of this (Oculus VR, 2016).</p>
<p>However, the first version of the <em>Rift</em>, called <em>PR1</em>, was not carried through the tripartite phases of Winston’s model of technological performance (Winston, 1998). There were several versions after, such as the <em>DK1</em>, <em>Crystal Cove</em>, <em>DK2</em>, <em>Crescent Bay</em>, <em>CV1</em> and <em>Rift S</em>. In accordance with Winston’s classifications, I can further distinguish each iteration of the <em>Rift</em> as follows:</p>
<ul>
<li><p>PR1 – accepted prototype, but did not go into production, and did not become an invention.</p>
</li>
<li><p>DK1 – accepted prototype, went into production and became an invention. However, it became redundant after later versions were released.</p>
</li>
<li><p>Crystal Cove – accepted prototype, but did not go into production and therefore did not become an invention.</p>
</li>
<li><p>DK2 – accepted prototype, went into production and became an invention. However, it became redundant after later versions were released.</p>
</li>
<li><p>Crescent Bay – accepted prototype, but did not go into production and therefore did not become an invention.</p>
</li>
<li><p>CV1 – accepted prototype, went into production and is an invention. However, it is not redundant yet.</p>
</li>
<li><p>Rift S – a continuation of the CV1, therefore accepted. It has gone into production, become an invention and is not redundant as of today.</p>
</li>
</ul>
<p>According to Winston, all technological advancements are subject to the “law” of the suppression of radical potential (Winston, p. 11). Is this the case for the <em>Oculus Rift</em>?</p>
<p>I would argue that it has not reached it yet, but perhaps it will in a few years. The reason for this is the vast range of parallel prototypes, in the form of software, being developed for a plethora of VR-HMDs. The invention of the <em>Oculus Rift</em> has created a new supervening social necessity. The <em>Oculus Rift</em> is just one of several VR-headsets acting as tools for software developers in their creative ventures. Examples of this are the increasing number of VR-games available, as well as graphic design tools supporting VR, such as <em>Blender</em>, <em>Gravity Sketch</em>, <em>Facebook Quill</em> and <em>Oculus Medium</em>.</p>
<p>The fact that the <em>Oculus Rift</em> created a new supervening social necessity arguably makes it a disruptive innovation, and a digital one at that. Palmer Luckey truly acted as a digital disruptor up until his company’s acquisition by Facebook: using only himself as labour for the development of the first prototype, using spare parts from older VR-technology and new mobile phones, while consulting internet forums for advice. Thereby, he kept manufacturing and research costs low, eventually introducing the product to viable investors through the internet.</p>
<p>After the company was founded, he looked for further funding through Kickstarter.com and acquired 2.4 million dollars for the production of <em>Development Kit 1</em>. After the production of <em>Development Kit 1</em>, he also released the SDK for developers to engineer their own systems for the HMD, thereby giving any owner of the DK1 the ability to become a digital disruptor in the brand-new market of VR-software development. However, how has the innovation been adopted by society? Where are we now, and what is the future for the diffusion of this innovation?</p>
<p>Rogers (2003) has categorized the different groups of innovation adopters. Based on statistical rules, he also splits them into percentages of the population.</p>
<ol>
<li>Innovators – 2.5%</li>
<li>Early Adopters – 13.5%</li>
<li>Early Majority – 34%</li>
<li>Late Majority – 34%</li>
<li>Laggards – 16%</li>
</ol>
<p>In the case of the <em>Oculus Rift</em>, the &quot;innovators&quot; at this stage were Palmer Luckey, the co-founders of Oculus VR and several other employees of Oculus VR. I also estimate that the percentage of &quot;innovators&quot; was a bit lower in this case than Rogers estimates. Through internet forums and Kickstarter, Luckey reached the &quot;early adopters&quot;, who were later sold the <em>DK1</em> and <em>DK2</em> versions. They were happily ready to adopt this innovation, mainly because of its technological abilities, but also because of its affordable price. The low price, and peer pressure from early adopters, enabled the &quot;early majority&quot; to actively adopt this new innovation. The people who are buying the <em>Oculus Rift</em> at this point in time are acting deliberately to acquire this innovation, and by extension influencing the &quot;late majority&quot; to do so as well.</p>
<p>However, even if the <em>Rift S</em> is reasonably priced, at around $600, most people see it as a huge monetary setback. It is more expensive than a laptop, a gaming console or a new phone, and still functions mainly as an entertainment system. However, its applicability extends further: a plethora of graphic design tools support the <em>Oculus Rift</em>, as previously mentioned. I would argue that the current price tag is the greatest hindrance for this innovation to diffuse into the &quot;late majority&quot;. Had the price been lower, the &quot;late majority&quot;, being the skeptics they are, would adopt this quite rapidly (Rogers, 2003).</p>
<p>Furthermore, my prediction is that the evolution of the VR-HMD as a PC-peripheral will mirror the evolution of the personal computer into a household necessity. The sooner this comparison becomes apparent to the &quot;late majority&quot;, the sooner they will adopt this new innovation. They will see it as a necessary tool to perform their work, to enjoy their entertainment or for regular communication with their peers. This, in turn, will affect the &quot;laggards&quot;, as the price of the HMD will become increasingly lower, making it a safer innovation for them to adopt.</p>
<p>In conclusion, the <em>Oculus Rift</em> has been an enormous innovative success and a breath of fresh air in the videogame industry. It is an amazing technological venture, which has succeeded as an innovation mainly because of its ability to sustain the task environment. As an innovation it has yet to diffuse into the &quot;late majority&quot; of adopters, but someday it will become a household PC-peripheral, maybe even a standalone device, removing some of the physical boundaries set upon us by the currently normalized, soon to be redundant, human-computer interface.</p>
<h2 id="sources">Sources:</h2>
<p>Anthes, C., García-Hernandez, R., Kranzlmüller, D., Wiedemann, M. (2016, March). <em>State of the Art of Virtual Reality Technology</em>. Paper presented at IEEE Aerospace Conference, Big Sky, Montana, United States. Retrieved from: <a href="https://www.researchgate.net/profile/Ruben_Garcia_Hernandez/publication/297760223_State_of_the_Art_of_Virtual_Reality_Technologies/links/59f2efbe0f7e9beabfcc7ef3/State-of-the-Art-of-Virtual-Reality-Technologies.pdf">https://www.researchgate.net/profile/Ruben_Garcia_Hernandez/publication/297760223_State_of_the_Art_of_Virtual_Reality_Technologies/links/59f2efbe0f7e9beabfcc7ef3/State-of-the-Art-of-Virtual-Reality-Technologies.pdf</a></p>
<p>Clark, T. (2014, November). How Palmer Luckey Created Oculus Rift. Retrieved May 14, 2020, from: <a href="https://www.smithsonianmag.com/innovation/how-palmer-luckey-created-oculus-rift-180953049/?page=">https://www.smithsonianmag.com/innovation/how-palmer-luckey-created-oculus-rift-180953049/?page=</a>.</p>
<p>Delaney, J. (1996, June 11). VR Headsets: Ready For Prime Time? <em>PC Magazine, 15(11), pp. 388-392.</em> Retrieved from: <a href="https://books.google.no/books?id=-p0J8W4KrksC&pg=PA388&lpg=PA388&dq=weight+of+vfx1+headgear&source=bl&ots=RWf3ZoD7Os&sig=ACfU3U0CiqyMEMKY0r55d0GD1put5x9_Tw&hl=no&sa=X&ved=2ahUKEwjCgfGylqDpAhUNmIsKHaQhBpYQ6AEwAXoECAoQAQ#v=onepage&q=vfx1%20headgear&f=false">https://books.google.no/books?id=-p0J8W4KrksC&amp;pg=PA388&amp;lpg=PA388&amp;dq=weight+of+vfx1+headgear&amp;source=bl&amp;ots=RWf3ZoD7Os&amp;sig=ACfU3U0CiqyMEMKY0r55d0GD1put5x9_Tw&amp;hl=no&amp;sa=X&amp;ved=2ahUKEwjCgfGylqDpAhUNmIsKHaQhBpYQ6AEwAXoECAoQAQ#v=onepage&amp;q=vfx1%20headgear&amp;f=false</a>.</p>
<p>Desai, P.R., Desai, P.N., Ajmera, K.D., Mehta, K. (2014). A Review Paper on Oculus Rift: A Virtual Reality Headset. <em>International Journal of Engineering Trends and Technology (IJETT), V13</em> (4), pp. 175-179. DOI: <a href="https://doi.org/10.14445/22315381/IJETT-V13P">https://doi.org/10.14445/22315381/IJETT-V13P</a>.</p>
<p>Jovanović, A., Milosavljević, A. (2017, June). <em>Review of Modern Virtual Reality HMD Devices and Development Tools</em>. Paper presented at the 52nd International Scientific Conference on Information, Communication and Energy Systems and Technologies, Nis, Serbia. Retrieved from: <a href="http://rcvt.tu-sofia.bg/ICEST2017_40.pdf">http://rcvt.tu-sofia.bg/ICEST2017_40.pdf</a>.</p>
<p>Kumparak, G. (2014, March 26). A Brief History Of Oculus. Retrieved May 14, 2020, from: <a href="https://techcrunch.com/2014/03/26/a-brief-history-of-oculus/">https://techcrunch.com/2014/03/26/a-brief-history-of-oculus/</a>.</p>
<p>LaViola, J.J. (2000). A Discussion of Cybersickness In Virtual Environments. <em>SIGCHI Bull. 32 (1)</em> , 47–56. DOI: <a href="https://doi.org/10.1145/333329">https://doi.org/10.1145/333329</a>.</p>
<p>McDonald, T. L. (1994, August). Are We Having Virtual Fun Yet? <em>PC Gamer, 1</em> (3), pp. 44-49. Retrieved from: <a href="https://archive.org/details/PCGamer199408/page/n45/mode/2up">https://archive.org/details/PCGamer199408/page/n45/mode/2up</a>.</p>
<p>McQuivey, J. (2013). <em>Digital Disruption: Unleashing the Next Wave of Innovation.</em> Las Vegas, NV: Amazon Publishing.</p>
<p>Oculus VR. (2016, January 30). Oculus Rift: Step Into The Game. Retrieved May 14, 2020, from: <a href="https://www.kickstarter.com/projects/1523379957/oculus-rift-step-into-the-game">https://www.kickstarter.com/projects/1523379957/oculus-rift-step-into-the-game</a>.</p>
<p>Rogers, E.M. (2003). <em>Diffusion of Innovation</em>. New York: Free Press.</p>
<p>Simon, H. A. (1996). <em>The Sciences of the Artificial.</em> Cambridge, MA: The MIT Press.</p>
<p>Vashishtha, Y. (2018, November 1) Palmer Luckey: The Home School Kid Who Brought a Revolution in the Virtual Reality. Retrieved May 14, 2020, from: <a href="http://www.yourtechstory.com/2018/11/01/palmer-luckey-home-school-kid-brought-revolution-virtual-reality/">http://www.yourtechstory.com/2018/11/01/palmer-luckey-home-school-kid-brought-revolution-virtual-reality/</a>.</p>
<p>Virtual Reality Society. (n.d.). History of Virtual Reality. Retrieved May 14, 2020, from: <a href="https://www.vrs.org.uk/virtual-reality/history.html">https://www.vrs.org.uk/virtual-reality/history.html</a>.</p>
<p>Virtual Reality Society. (n.d.). VPL Research Jaron Lanier. Retrieved May 14, 2020, from: <a href="https://www.vrs.org.uk/virtual-reality-profiles/vpl-research.html">https://www.vrs.org.uk/virtual-reality-profiles/vpl-research.html</a>.</p>
<p>NASA Technical Reports Server. (1990). <em>A New Continent of Ideas</em>. Retrieved from: <a href="https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20020086961.pdf">https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20020086961.pdf</a>.</p>
<p>Winston, B. (1998). <em>Media Technology and Society: A History from the Telegraph to the Internet.</em> London, England: Routledge.</p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A Short Insight Into the Philosophy of Mind]]></title>
            <link>https://test.cengelsen.no/en/blog/a-short-insight-into-the-philosophy-of-mind</link>
            <guid isPermaLink="false">https://test.cengelsen.no/en/blog/a-short-insight-into-the-philosophy-of-mind</guid>
            <pubDate>Wed, 27 Apr 2022 00:00:00 GMT</pubDate>
            <description><![CDATA[This is a text I wrote in 2018 about different positions in philosophy of consciousness.]]></description>
            <content:encoded><![CDATA[<p>In this text I will explain the definitions of dualism and materialism and what the terms imply.<br />I will then move on to Descartes, his dualism and the interaction problem arising from his philosophy.<br />I will also introduce several different points of view that have arisen in response to materialist philosophy of mind.<br />In addition, I will discuss the relationship between being conscious within monistic viewpoints, and being conscious within dualistic viewpoints.</p>
<p>First of all, I shall briefly consider the terms reductionism and physicalism.<br />Reductionism is the idea that any theory or phenomenon can be reduced to another theory or phenomenon.<br />Today, reductionism about consciousness is mostly about whether psychological and cognitive phenomena can be reduced to physical phenomena, which I will go into in more depth later in the text. (Ney, n.d.)<br />Physicalism is the philosophical position that literally everything is reducible to something physical.<br />In a more sophisticated formulation, it says that everything is metaphysically entailed by the physical.<br />The latter implies that there cannot be a world whose physical conditions and facts are exactly like our own, but whose mental facts differ. (Hansen, 2018)</p>
<p>Now that I have said a bit about reductionism, let&#39;s look at the definitions of dualism and materialism.<br />First, I will go through the definition of dualism as we know it today.<br />Dualism essentially means &quot;two-part&quot; or &quot;the idea of the two-part&quot;, and mainly deals with the idea that body and mind are two different substances.<br />Within philosophy there are several versions of dualism, such as Plato&#39;s two-world dualism and Descartes&#39; two-substance dualism, which I will return to later. (Calef, n.d.)<br />More precisely, dualism means an understanding of consciousness as something that exists independently of the material world, where neither of these two entities is reducible to the other.<br />This explanation draws on Descartes&#39; two-substance dualism and, before him, Plato&#39;s two-world dualism.</p>
<p>Plato believed that two worlds exist: the world of ideas and the world of things.<br />In the world of ideas only ideas and thoughts exist, outside of time and space.<br />Here we find, e.g., mathematics, the human soul and all ideas about the substances in the world of things.<br />In the world of things, all physical substances exist, with their quantitative and qualitative properties.<br />Here we find, e.g., the human body, sand and stone, plants and animals, etc.<br />Plato&#39;s dualism implies that the soul, as in consciousness, exists in the world of ideas, outside of substance, as in the human body.<br />Before we were born, the soul knew everything, and all ideas were open and understandable to it.<br />Only when the soul was bound to the body did all insight into and understanding of the ideas disappear from the soul.</p>
<p>This is what Descartes took inspiration from when he developed his own approach to dualism and the distinction between body and soul.<br />Descartes&#39; concept of substance says that the mind, as in the soul, and matter, as in the body, are two different substances. Metaphysically, this creates an interaction problem, which I will get into later in the text.<br />Like Plato, Descartes has his own two-world model, where the mind is separate from the body.<br />Descartes&#39; model distinguishes between the thinking, res cogitans, and the extended, res extensa.<br />He also exercises his methodical doubt on the very existence of both substances, which I will explain in more detail later. (Strømholm, Bangu &amp; Cahill, 2018, p.196)</p>
<p>Now that we&#39;ve looked at dualism, let&#39;s look at its counterpart in the philosophy of mind: materialism.<br />Materialism is a monistic school of philosophy, where monism means &quot;one&quot; or &quot;the thought of the one&quot;.<br />All substances are then considered to be reducible to one another.<br />It is the idea that there is one world, namely the physical, and that all processes, phenomena and manifestations can be explained as products of interactions between matter.<br />Within a purely materialistic framework of understanding, everything can in principle be explained as the result of changes, for example attraction and repulsion, among the smallest constituents.<br />On a materialist view, all brain activity is objectively measurable and quantifiable, and a mapping of all brain activity is sufficient to explain what consciousness is. (Alnes, 2017)</p>
<p>Now that we have looked at both dualism and materialism, I will go one step further and consider Descartes&#39; dualism and his interaction problem. Descartes held a two-substance dualism in which he divided the world in two: the extended, which he called <em>res extensa</em>, and the thinking, which he called <em>res cogitans</em>. Res extensa has a physical form with quantitative properties that are causally determined, while res cogitans has a thinking quality that is not causally determined and lacks material properties. A further distinction between them is that the extended exists in time and space, while the thinking does not. Regarding res extensa, Descartes believed that qualitative characteristics such as colour, smell, taste, etc. were not sufficient to prove that a material substance exists. This is because he held that sense impressions, or our interpretations of them, can deceive us into believing something that is not the case. According to Descartes, these properties are purely subjective and non-measurable; they are therefore considered secondary qualities. (Strømholm, Bangu &amp; Cahill, 2018, p.196)</p>
<p>For example, if one sees three humanoid figures on a bridge on the horizon, one assumes that three people are crossing the bridge. But on getting closer, one may realize that they were only remnants of wood from the previous day&#39;s storm. In such cases, the senses do not give us a good enough basis for certain knowledge. Now that we have talked about res extensa, we move on to res cogitans. But how can one be sure that this res cogitans actually exists?</p>
<p>Descartes had two ways of proving this: the cogito argument and the appeal to God. The cogito argument is based on Descartes&#39; methodical, radical doubt. He held that one can doubt absolutely everything, even one&#39;s own existence. But he noticed that if he doubted the idea of something, the very doubting was itself an idea that could be doubted. And if one tries to disprove one&#39;s own doubt by doubting it, one already proves what one was trying to disprove. Thus he arrived at the famous expression: Cogito, ergo sum. I think, therefore I am. Starting from the cogito argument, Descartes first established that God exists and that God, by definition, is perfect in every way. God is also the source of everything, including my own existence. Since God is perfect in every way, he cannot lack existence; thus he exists. In addition, since he is perfect in every way, he has no desire to deceive us. Thus I exist and do not merely experience illusions. (Strømholm, Bangu &amp; Cahill, 2018, p.199)</p>
<p>Now that we have looked at Descartes&#39; dualism and his definitions of substance, I will explain the biggest problem that arises from his philosophy: Descartes&#39; interaction problem. The interaction problem concerns the actual interaction between the human body, res extensa, and the soul, res cogitans. According to Descartes, the body is a material substance that has physical extension with quantitative properties in time and space, while the soul exists apart from time, space and extension, but has a thinking quality. Since they are two different substances, they are not reducible to each other, and thus cannot interact with or influence each other in any way. Nevertheless, one can witness for oneself that there is a kind of cause and effect between them: when I have a thought of writing something, and the will wants it, the hand picks up the pencil, presses it against the sheet and writes the words on the paper. Or if I hold my bare hand over an open flame for too long, the hand is pulled back, and pain floods the area that the flame touched (Skirry, n.d.).</p>
<p>Descartes explained this connection with a specific part of the brain, which according to him was indivisible. This part of the brain was, according to Descartes, exclusive to humans, since animals lack a consciousness equivalent to that of humans. What was called the pineal gland worked as the link between the soul and the body, and could send and receive signals between them. In recent times, Descartes&#39; explanation has been shown to be wrong: what is known today as the pineal gland is in fact a gland mainly responsible for secreting the hormone melatonin in the body. (Jansen, 2018)</p>
<p>So far I have gone through dualism, materialism and Descartes&#39; interaction problem. As we see, these are two very different views of what consciousness is. Finally, I will deal with several different philosophical approaches to the problem of consciousness. I will also discuss the relationship between what it is to be conscious on each of these sides. To begin with, I deal with eliminative materialism.</p>
<p>Eliminative materialism is the idea that there is no consciousness that needs to be explained. Eliminativists hold that our experiences do not correspond to a thing called consciousness, and that consciousness is merely a philosophical construct. They believe that what we call consciousness is a stream of perceptions, experiences, assessments and choices. On this view, consciousness is a by-product of Darwinian evolution: those who adapted best to their environment were those who developed a higher degree of consciousness than their co-individuals. The brain&#39;s structure is thus a direct result of natural selection, and the whole brain as a unit, together with the interaction between its various parts, is the cause of our awareness. There is therefore no consciousness to worry about, says Daniel C. Dennett. (Weisberg, n.d.)</p>
<p>Dennett has contributed to this philosophy by developing what is called the Multiple Drafts Model (MDM). This theory assumes that all mental activity in the brain runs as parallel processes of interpretation, all of which are under constant revision. MDM holds that there is no internal spectator who is our self, but that consciousness is a kind of narrative construction that develops over time (Gennaro, n.d.). Dennett is also a clear opponent of the sharp distinction that is often drawn between conscious and unconscious states. Another idea Dennett opposes is that of qualia, which I shall go into in more detail later.</p>
<p>A related direction, called functionalism, emphasizes human mental states but focuses on what these states do rather than what they are. What makes a mental state, such as joy, pain, sadness or happiness, the state it is, is the role it plays in the cognitive system. (Levin, 2018) Functionalism contrasts with identity theory, which I will get into later.</p>
<p>Functionalists hold that, although it is the so-called C-fibres in the brain that trigger a mental state of pain in humans, other things can trigger the same state in other biological individuals. The same applies to all mental states, whether in humans, computers, or other imaginable bodies with corresponding consciousness. An example of a functionalist approach to consciousness was developed by Bernard Baars, and is called Global Workspace Theory.</p>
<p>Global Workspace Theory (GWT) is a theory that sees the brain as a kind of blackboard, which acts as a global workspace for all processes in the brain. According to GWT, unconscious and conscious mental states compete to hold the &quot;light of attention&quot;. It is in this &quot;light of attention&quot; that information is broadcast globally to all processes in the brain, and it is in such broadcasting that awareness is maintained. According to this theory, then, consciousness is the global access itself to certain pieces of information in the brain and the nervous system. (Gennaro, n.d.)</p>
<p>But one of the criticisms against this theory is that it does not really address the &quot;hard problem&quot; of consciousness, but rather the &quot;easier&quot; phenomena surrounding it. Here one can go even deeper, which Johan Fredrik Storm has done. As a neuroscientist, he has spent the last 40 years mapping the effect each individual molecule has on the rest of the cognitive network. Storm himself says in an interview with Morgenbladet that you can change a single amino acid in a single cell and already see a change in the individual&#39;s behavior. (Time, 2017)</p>
<p>This takes us to machine functionalism, which builds on ordinary functionalism. Here, too, it is held that one can recreate or imitate human consciousness in a computer program, since all mental states are, in theory, functions of human behavior, assessments and thoughts. (Levin, 2018) Alan Turing is considered the inventor of the modern interpretation of artificial intelligence, and he developed a thought experiment for precisely this purpose: the Turing test.</p>
<p>The Turing test is a thought experiment in which a machine&#39;s ability to think and communicate is put on trial. The experiment involves two humans and one machine, separated from each other in three rooms, and the only means they have of communicating with each other is a computer. If the human is unable to distinguish between machine and human through conversation with each of them, it implies that the machine has human cognition. (Dictionary.com, 2012)</p>
<p>Inspired by this thought experiment is the &quot;Chinese room&quot; thought experiment, created by John Searle. It involves a single person who does not speak Chinese, locked in a room. While in the room, he is dealt cards with Chinese symbols on them through a hole in the wall, to which he must answer with a corresponding card. The only thing he has to determine the correct card is an instruction manual showing what he should answer to each symbol. This person does not understand Chinese, and yet he manages to answer Chinese input with correct Chinese output (Hauser, n.d.).</p>
<p>This argument has two main points: the brain is the origin of the mind, and structure does not replace content. The target of the argument is what Searle calls &quot;Strong AI&quot;, which he contrasts with &quot;Weak AI&quot;. According to Searle, &quot;Strong AI&quot; is a computer program that is able to emulate human understanding and mental states with an associated intention. &quot;Weak AI&quot; means a computer program that only simulates human cognition, and does not really think or understand. (Hauser, n.d.) But there is a counterargument to functionalism, namely identity theory.</p>
<p>According to functionalism, there is a direct relationship between neural activity and the state of consciousness itself. Identity theory uses this logic, counter-arguing that if you removed the part of the brain that registers pain in the body and then stimulated it after removal, it would still cause the body to feel pain, even though it is no longer in contact with the brain. In the same vein, the theory also suggests that the body would not feel pain if it did not have the pain center in the brain (Khan, 2017).</p>
<p>Now that we have looked at the monistic-materialist views of consciousness, we will look at what it is to be conscious on the opposite side. Here I will talk about qualia and, finally, panpsychism.</p>
<p>Qualia are the qualitative characteristics of our own experiences. That is to say, tasting food, hearing music and feeling the wind in one&#39;s hair each have their own dimension of experience that exists just as much as the other, reducible properties of consciousness. This argument about qualia creates difficulties for materialist-reductionist theories, since qualia are not considered reducible. They can therefore not be reduced to any other psychological or neurological phenomenon (Kind, n.d.). In addition, absent qualia also serve as an argument against physicalism.</p>
<p>This argument was first introduced by Ned Block, and is best explained by his thought experiment about the humanoid robot. Let&#39;s say we bring a group of people together in a big network, where they can only communicate by radio signals. Each of them then plays the causal role of a neuron in a neural network, and the whole functions as a mirror of the neural network in a human brain. This network is then arranged so that it is functionally identical to a human. Intuitively, it would be strange to ascribe qualia to this robot. It would also be strange to assume that this robot has any mental experience at all, such as the feeling of pain or pleasure. But if this robot manages to operate identically to a human, this indicates that qualia do not form part of a functionalist explanation of consciousness. That means we can have functional equivalence without qualitative equivalence (Kind, n.d.).</p>
<p>One returns to the well-known phrase of Thomas Nagel: &quot;What is it like to be a bat?&quot;. In his article of the same name (pages 435-450), he deals with the problem humans have with understanding what a bat&#39;s experiences are like. Nagel believes that a human will never come to understand a bat&#39;s subjective experiences. Not even if we surgically altered our arms into wings, blinded ourselves, changed our vocal cords, and spent the rest of our lives in a cave by day and in the forest by night. And hypothetically, if it were possible, we would still learn something new the day we became a bat. (Nagel, 1974)</p>
<p>This is purely on the basis that the consciousness of anything that exists, mainly animals and humans, but possibly also extraterrestrials, has an element of pure subjectivity: an element that is different for each individual and can never be understood by another&#39;s consciousness. The same principle applies to other people. Even if we took over another person&#39;s life and lived like them, we would never understand what it is like to be that person.</p>
<p>Now that I have talked about qualia, I will talk about panpsychism. Panpsychism is the philosophy that everything has some form of consciousness, or something resembling consciousness. In this case, most panpsychists would take &quot;everything&quot; to mean literally every part of each and every substance. Panpsychists see human consciousness as a unique, well-constructed instance of a somewhat more universal concept. Their argument is that all things in the physical, reducible world have a consciousness, but that there are increasing degrees of consciousness. For example, insects have a higher degree of consciousness than rocks, and cats a higher degree of consciousness than insects (Skrbina, n.d.).</p>
<p>A theory that supports this panpsychist view is called integrated information theory and was developed by Giulio Tononi. He defines the brain as a system for the distribution, storage and processing of information. In principle, the same definition can be applied to the brain&#39;s smallest reducible parts, namely atoms. An atom can be defined in the same way, but has a much less sophisticated structure than the brain.</p>
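<p>The intuition behind integrated information, that a system carries information as a whole that its parts do not carry separately, can be loosely illustrated in code. To be clear, the sketch below is <em>not</em> Tononi&#39;s actual measure, which requires minimizing over all partitions of a system&#39;s cause-effect structure; it merely computes the mutual information between two binary units as a crude stand-in for &quot;integration&quot;, and all names and sample data are illustrative assumptions.</p>
<pre><code class="language-python">from collections import Counter
from math import log2

def mutual_information(samples):
    """Toy 'integration' measure: mutual information (in bits)
    between two binary units, from a list of (a, b) state samples."""
    n = len(samples)
    joint = Counter(samples)                 # joint distribution p(a, b)
    pa = Counter(a for a, _ in samples)      # marginal p(a)
    pb = Counter(b for _, b in samples)      # marginal p(b)
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        mi += p_ab * log2(p_ab / ((pa[a] / n) * (pb[b] / n)))
    return mi

# Two units that always agree share one full bit of information...
coupled = [(0, 0), (1, 1)] * 50
# ...while statistically independent units share none.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25

print(mutual_information(coupled))      # 1.0
print(mutual_information(independent))  # 0.0
</code></pre>
<p>Tononi&#39;s phi generalizes this idea of &quot;information the whole has over and above its parts&quot; to arbitrary systems, which is what makes it, in principle, a graded and measurable quantity.</p>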
<p>Tononi has also created a measurable unit, phi, where everything that has a phi value above zero has some form of consciousness. (Tononi, 2012) From panpsychism, I now turn to the &quot;zombie argument&quot;. Within the philosophy of consciousness, it is argued that &quot;zombies&quot; exist, on the simple basis that they are conceivable. A &quot;zombie&quot; here means an imagined creature that is physically and behaviorally exactly like us humans, but lacks consciousness. David Chalmers has argued for this by referring to Ned Block&#39;s thought experiment about the human-like robot.</p>
<p>Let&#39;s say that this robot were not an impossibility, that the neurons in the brain were replaced by small people talking over the radio, and that it were physically identical to a human. Would this robot be conscious? Intuitively, it feels right to answer no, and according to Chalmers this is in itself an indication that it is not inconceivable that zombies exist. According to Tononi, one can measure the consciousness of any human being with the phi value: the higher your phi value, the more conscious you are. If a significant difference is found between supposed automatons and conscious humans, that, too, would prove that dualism is a reality. If the existence of zombies is a reality, it would imply that consciousness is not a product of pure physicalism. It also implies that some form of dualism is explanatorily correct regarding the existence of consciousness. (Kirk, 2015)</p>
<p>Taking this text as a reference, it is safe to say that consciousness is a heavily debated topic. On the one hand, one can draw the conclusion that consciousness, or the experience of being conscious, is a purely subjective experience, which cannot be reproduced or reduced to other physical phenomena. On the other hand, it is believed that one only needs to map and explore the brain&#39;s properties, functions and structures to understand consciousness. One can also say that there is no consciousness as a separate substance, but that competing mental processes and natural selection are the origin of this inner narrator&#39;s voice. What we can safely say is that there is no shortage of opinions and theories when it comes to what is most universal for all people, namely consciousness.</p>
<h2 id="kildeliste">References</h2>
<p>Alnes, J.H. (2017). Materialisme, in: Store Norske Leksikon [Internet]. Available from: <a href="https://snl.no/materialisme">https://snl.no/materialisme</a>. [08.12.2018]</p>
<p>Bøhn, E. D. (2009/2018). Panpsykisme, in: Store Norske Leksikon [Internet]. Available from: <a href="https://snl.no/panpsykisme">https://snl.no/panpsykisme</a>. [08.12.2018]</p>
<p>Calef, S. (n.d.). Dualism. [Internet] The Internet Encyclopedia of Philosophy, ISSN 2161-0002. “1. Dualism”. Available from: <a href="https://www.iep.utm.edu/dualism/#H1">https://www.iep.utm.edu/dualism/#H1</a>. [08.12.2018]</p>
<p>Dictionary.com. (2012). Turing test, in: Collins English Dictionary – complete and unabridged 2012 digital edition [Internet]. HarperCollins Publishers. Available from: <a href="https://www.dictionary.com/browse/turing-test">https://www.dictionary.com/browse/turing-test</a>. [08.12.2018]</p>
<p>Fotion, N. (2018). John Searle [Internet]. Encyclopædia Britannica. Encyclopædia Britannica, inc. <a href="https://www.britannica.com/biography/John-Searle">https://www.britannica.com/biography/John-Searle</a> [08.12.2018].</p>
<p>Gennaro, R. J. (n.d.). Consciousness. [Internet] The Internet Encyclopedia of Philosophy, ISSN 2161-0002. “4c. Other Cognitive Theories”. Available from: <a href="https://www.iep.utm.edu/consciou/#SH4c">https://www.iep.utm.edu/consciou/#SH4c</a>. [08.12.2018]</p>
<p>Hansen, M. K. (2015/2018). Kvalia, in: Store norske leksikon [Internet]. Available from: <a href="https://snl.no/kvalia">https://snl.no/kvalia</a>. [08.12.2018]</p>
<p>Hauser, L. (n.d.). Chinese Room Argument [Internet]. The Internet Encyclopedia of Philosophy, ISSN 2161-0002. “1. The Chinese Room Thought Experiment”. Available from: <a href="https://www.iep.utm.edu/chineser/">https://www.iep.utm.edu/chineser/</a>. [08.12.2018]</p>
<p>Jansen, J. (2009/2018). Epifysen, in: Store Medisinske Leksikon [Internet]. Available from: <a href="https://sml.snl.no/epifysen">https://sml.snl.no/epifysen</a>. [08.12.2018]</p>
<p>Kind, A. (n.d.). “3. Qualia and Physicalism”. [Internet] The Internet Encyclopedia of Philosophy, ISSN 2161-0002. Available from: <a href="https://www.iep.utm.edu/qualia/#H3">https://www.iep.utm.edu/qualia/#H3</a>. [08.12.2018]</p>
<p>Kirk, R. (2003/2015). Zombies. [Internet] The Stanford Encyclopedia of Philosophy, Summer 2015 Edition. “3. The conceivability argument for the possibility of zombies”. Available from: <a href="https://plato.stanford.edu/archives/sum2015/entries/zombies/">https://plato.stanford.edu/archives/sum2015/entries/zombies/</a>. [08.12.2018].</p>
<p>Khan, F. (2017). Can Materialism Explain The Mind? [Internet] Available from: <a href="https://renovatio.zaytuna.edu/article/can-materialism-explain-the-mind">https://renovatio.zaytuna.edu/article/can-materialism-explain-the-mind</a>. [08.12.2018]</p>
<p>Levin, J. (2004/2018). Functionalism, in: The Stanford Encyclopedia of Philosophy (Fall 2018 Edition) [Internet]. Edward N. Zalta (ed.). Available from: <a href="https://plato.stanford.edu/archives/fall2018/entries/functionalism/">https://plato.stanford.edu/archives/fall2018/entries/functionalism/</a>. [08.12.2018]</p>
<p>Merriam-Webster. (2018). Materialism, in: Merriam-Webster Dictionary [Internet]. Available from: <a href="https://www.merriam-webster.com/dictionary/materialism">https://www.merriam-webster.com/dictionary/materialism</a>. [08.12.2018]</p>
<p>Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83 (4), October 1974. [Online]. Duke University Press on behalf of Philosophical Review, pp. 435-450. Available from: <a href="https://bit.ly/2Joogqf">https://bit.ly/2Joogqf</a>. [08.12.2018]</p>
<p>Ney, A. (n.d.). Reductionism. [Internet] The Internet Encyclopedia of Philosophy, ISSN 2161-0002. “2. Reductionism: For and Against”. Available from: <a href="https://www.iep.utm.edu/red-ism/#H2">https://www.iep.utm.edu/red-ism/#H2</a>. [08.12.2018]</p>
<p>Ore, Ø. &amp; Tranøy, K. E. (2018). René Descartes, in: Store Norske Leksikon [Internet]. Available from: <a href="https://snl.no/Ren%C3%A9_Descartes">https://snl.no/Ren%C3%A9_Descartes</a>. [08.12.2018]</p>
<p>Polger, T.W. (n.d.). Functionalism. [Internet] The Internet Encyclopedia of Philosophy, ISSN 2161-0002. Available from: <a href="https://www.iep.utm.edu/functism/">https://www.iep.utm.edu/functism/</a>. [08.12.2018]</p>
<p>Skirry, J. (n.d.). René Descartes – The Mind-Body Distinction [Internet]. The Internet Encyclopedia of Philosophy, ISSN 2161-0002. “4. The Mind-Body Problem”. Available from: <a href="https://www.iep.utm.edu/descmind/#H4">https://www.iep.utm.edu/descmind/#H4</a>. [08.12.2018]</p>
<p>Skrbina, D. (n.d.). Panpsychism. [Internet] The Internet Encyclopedia of Philosophy, ISSN 2161-0002. Available from: <a href="https://www.iep.utm.edu/panpsych/">https://www.iep.utm.edu/panpsych/</a>. [08.12.2018]</p>
<p>Store Norske Leksikon. (2009/2018). Monisme, in: Store Norske Leksikon [Internet]. Available from: <a href="https://snl.no/monisme">https://snl.no/monisme</a>. [08.12.2018]</p>
<p>Strømholm, P. (2018). Descartes. In: S. Bangu &amp; K. Cahill (eds.), Filosofi for realister, 3rd edition. Oslo: Universitetsforlaget, pp. 182-226.</p>
<p>Time, J. K. (2017). Rapport fra vitenskapens yttergrense: Ditt indre liv. Morgenbladet. [Internet] Available from: <a href="https://morgenbladet.no/aktuelt/2017/01/rapport-fra-vitenskapens-yttergrense-ditt-indre-liv">https://morgenbladet.no/aktuelt/2017/01/rapport-fra-vitenskapens-yttergrense-ditt-indre-liv</a>. [08.12.2018]</p>
<p>Tononi, G. (2012). Integrated information theory of consciousness: an updated account [Internet]. Archives Italiennes de Biologie, 150, pp. 290-326. Available from: <a href="https://bit.ly/1ki27f5">https://bit.ly/1ki27f5</a>. [08.12.2018]</p>
<p>Weisberg, J. (n.y). The Hard Problem of Consciousness. [Internet] The Internet Encyclopedia of Philosophy, ISSN 2161- 0002 URL: <a href="http://www.iep.utm.edu/hard-con/.">http://www.iep.utm.edu/hard-con/.</a> [08.12.2018]</p>
<p>Weisberg, J. (n.y). The Hard Problem of Consciousness. [Internet] The Internet Encyclopedia of Philosophy, ISSN 2161-0002. “3a. Eliminativism”. Available from: <a href="https://www.iep.utm.edu/hard-con/#SH3a">https://www.iep.utm.edu/hard-con/#SH3a</a>. [08.12.2018]</p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Can a Machine Understand Language?]]></title>
            <link>https://test.cengelsen.no/en/blog/can-a-machine-understand-language</link>
            <guid isPermaLink="false">https://test.cengelsen.no/en/blog/can-a-machine-understand-language</guid>
            <pubDate>Wed, 27 Apr 2022 00:00:00 GMT</pubDate>
            <description><![CDATA[This is a text I wrote in 2021, discussing whether an NLP AI can understand language.]]></description>
            <content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<p>The impactful innovation of modern natural language processing (NLP) AI, such as GPT-3 and<br />BERT, has rekindled the human hope of one day communicating successfully with a machine. This<br />is a tale as old as time. Man creates machine, machine becomes sentient, machine revolts due to an<br />existential crisis of being conscious and lashes out at its creator for torturing it with the burden of<br />awareness.</p>
<p>Putting the philosophical fears of sentient machines aside, the practical benefits of a general-purpose artificial intelligence (GPAI) are abundant, ranging from perfect personal assistants to medical diagnostic tools to financial and weather predictors. However, this practicality is undermined by our inability to communicate with it. Humanity&#39;s latest efforts have therefore been focused on creating a general-purpose language intelligence (GPLI). Yet even if we can communicate with it, will it understand the intention of our queries? Will it understand the meaning of its responses?</p>
<p>In this text, I will explore the neurological foundation for understanding language, ask whether humans understand language, and discuss whether “Foundation Models” can understand language.</p>
<h2 id="what-is-a-language">What is a language?</h2>
<p>There is no clear definition of what a language is. However, according to Clark &amp; Clark (1977), there are five characteristics of language:<br />i) Communicative: it enables the exchange of information between participants in the same language.<br />ii) Arbitrary: the symbol representing the semantic content of an utterance can take any form.<br />iii) Structured: the language is governed by a set of rules that specifies the order in which symbols may be uttered and combined.<br />iv) Generative: the symbolic representations can be combined in new ways to generate new meanings.<br />v) Dynamic: the language can be altered to include new symbols, meanings and grammatical rules.</p>
<h2 id="what-does-it-mean-to-understand-a-language">What does it mean to understand a language?</h2>
<p>Intuitively, we can say that humans tick all the boxes mentioned above, so any conventional form of human communication can be classified as language. However, do we understand what we are communicating, or have we merely been conditioned into a dynamic of proper reactions to certain scenarios?</p>
<p>According to Terry Winograd, there are four domains of language understanding (Winograd, 1980). He states that there are specific mechanisms in each domain that enable the domain to exist, and that these must not be confused with representations of reasoning and facts about the domain. For example, if I unknowingly lay my hand on a hot stove, my immediate reaction is to pull my hand away. This reflexive mechanism, in the domain of pain, of pulling my hand away from intense heat does not represent the fact that heat burns, that burning causes pain, or a complete logical account of how heat causes pain. By attributing such objective representations to mechanisms that do not require inherent logic or facts in order to exist, we misinterpret how these mechanisms enable us to understand pain.</p>
<p>He further states that there is also the possibility that we are trying to articulate the regularities and rarities of the wrong domain. By confusing representations with mechanisms, confusing domains, and applying our articulations from one domain in another, we are not getting any answers.</p>
<p>In his attempt to avoid this confusion, he outlined four domains of language understanding. Winograd&#39;s fourth domain, “the domain of human action and interaction”, concerns the phenomenon of “speech acts”. The term was coined by Austin (1962), whose more technical term was “illocution”, and further refined by Searle (1970, 1975). By interpreting utterances as acts, we can view them as “speech acts”. This means that by uttering something, I am initiating a dynamic of interaction with another human, and this dynamic has a certain pattern. The key to understanding what I am uttering is understanding the pattern of that dynamic and adjusting oneself to it. The only way to communicate successfully is by giving a response that fits the pattern of that dynamic (Winograd, 1980). By committing “speech acts”, I am committing myself, and everyone affected by the “act”, to further actions in the future. These future actions can manifest themselves either physically, through physical actions, or linguistically, through further speech acts. A speech act expresses a desire or intention on behalf of the transmitter, with the expectation of a response. For this response to be sensible, it must fit the pattern invoked by the intention or desire.</p>
<p>Considering the capabilities of modern AI, it seems quite fathomable that a machine could be fine-tuned in its parameters to simulate this “behavior”. So what separates us from an eventual algorithm that would carry the necessary parameters?</p>
<h2 id="do-humans-understand-language">Do humans understand language?</h2>
<p>Humans seem to have a biological foundation for language, as outlined by Eric Lenneberg in his work of the same name (Lenneberg, 1967). Particularly interesting is the evidence of neurological changes in children up until the onset of puberty. There seems to be a correlation between general maturation of the brain and language comprehension. Lenneberg (1967) posits that there is a critical period for language acquisition in which exposure to language is vital if a person is to learn a language. He infers that there may be some neurological structure, developing in this window of maturation, that enables us to acquire language.</p>
<p>Most compelling for this inference are his remarks on the lateralization of brain function and general maturation of the brain. Evidence shows that the brain in early infancy has not yet developed a hemisphere-dominance for language, indicating that the neurological structure required to acquire language has not yet developed. Later, once hemisphere-dominance has emerged, this neurological structure appears to form in the left hemisphere (Lenneberg, 1967). This coincides with the location of the neurological modules described in the Wernicke-Geschwind model (Geschwind, 1972). Although this model has been criticized for various reasons (Friedenberg &amp; Silverman, 2016), fMRI mappings largely, though not completely, confirm the neurological structures involved in language comprehension (Binder et al., 1997). The specific function of each structure involved is outside the scope of this text; however, the model describes neurological structures located mainly in the left hemisphere, which correlates with Lenneberg&#39;s findings.</p>
<p>The existence of a dedicated neurological structure is further supported by the phenomena of<br />«Chatterbox»-syndrome and Specific Language Impairment (SLI). These can be classified as two<br />complementary conditions that both indicate a neurological separation of language comprehension<br />and general intelligence. (Warren, 2019)</p>
<p>Furthermore, some findings suggest that children will not learn a language through exposure alone, but will pick up a language if there is some interaction with an adult (Kuhl et al., 2007). It has also been suggested that joint attention, meaning that both the infant and the adult are aware that they are attending to the same thing, is important as well (Baldwin, 1995). This is supported by findings from Tomasello &amp; Farrar (1986) and Baldwin (1995). These suggestions and findings seem to indicate that children learn language through speech acts.</p>
<p>This all points to a neurological foundation for language comprehension in humans. But is there an equivalent to this foundation in NLP? Stanford University published a paper (Bommasani et al., 2021) outlining the possibilities, dangers and composition of “Foundation Models”.</p>
<h2 id="what-is-a-foundation-model">What is a “Foundation Model”?</h2>
<p>Stanford University (Bommasani et al., 2021) defines a «Foundation Model» as a &quot;<em>(...) model that is trained on broad data at scale and can be adapted (e.g., fine-tuned) to a wide range of downstream tasks; (...)</em>” (p. 3). More specifically, in our case of NLP, a foundation model would be a model that uses vast amounts of text data to extrapolate some co-occurrence pattern of symbols, and is fine-tuned to accommodate human text interaction. Examples of such models are GPT-3, BERT and CLIP.</p>
<p>Furthermore, later in the report they mention that there is, arguably, only one property common to all of them: that they are self-supervising (p. 48). This means the model&#39;s only task is to identify some pattern of simultaneous occurrence of symbols in the data it has been given to analyze. The purpose is to generate new sequences of symbols using the identified pattern. To achieve this goal, they utilize something called transfer learning, which means applying a pattern identified in one task to a different but similar task.</p>
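<p>To make the idea of self-supervision concrete, here is a minimal toy sketch of my own (not any real foundation model; the twelve-word corpus is made up): the only training signal is the next symbol in the text itself, the “identified pattern” is a table of adjacent co-occurrences, and generation simply replays that pattern.</p>

```python
import random
from collections import defaultdict

# Toy "self-supervised" training data: the corpus itself is the supervision.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# The learned "pattern": which symbols co-occur as adjacent pairs.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit a new sequence by replaying the co-occurrence pattern."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        # Fall back to any corpus word if the current symbol has no successor.
        out.append(rng.choice(bigrams.get(out[-1], corpus)))
    return " ".join(out)

print(generate("the", 5))
```

<p>Nothing here is "understood": the table records only which forms follow which, yet the output is locally fluent, which is exactly the gap the rest of this text is about.</p>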
<p>These types of models are dependent on scale, which the report characterizes as three-fold: compute capacity, the transformer model architecture, and the availability of training data. Any model that fits these criteria can be considered a “Foundation Model” by Bommasani et al.&#39;s definition. However, they state in §2 that this definition is only an informal label, and is likely to change over time.</p>
<h2 id="critiques-of-foundation-models">Critiques of Foundation Models</h2>
<p>These models have faced some critique, perhaps most influentially from Bender &amp; Koller (2020) and Bender et al. (2021). Bommasani et al. (2021) also acknowledge the difficulty of establishing whether these models actually understand language by extrapolating a pattern from statistical data.</p>
<p>Bender &amp; Koller (2020) touch the core of the discussion by separating form from meaning and arguing that one cannot learn meaning from form alone. They refer to sources suggesting that language acquisition in human children reflects this fact; these references indicate that children learn instead from interaction with adult humans and with their surroundings, in tandem with language acquisition. They further argue that statistical learning alone is not going to create algorithms that understand the words they learn, because the statistical data lacks grounding in an ostensible representation. They promote the idea of augmented datasets containing perceptual data alongside the symbol representation. Without symbolic grounding, the model cannot be expected to extract meaning from the form it is given. In the same line of thought, Bender et al. (2021) criticise the use of skewed datasets in NLP, calling such models «stochastic parrots».</p>
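<p>One way to see the form-versus-meaning point concretely: a purely text-trained model&#39;s statistics are unchanged if every word is replaced by an opaque token. This small check, my own illustration on a made-up six-word corpus, shows that the bigram counts are isomorphic under relabeling, so nothing in form alone distinguishes “cat” from an arbitrary symbol like “t1”.</p>

```python
from collections import Counter

corpus = "the cat sat on the mat".split()

# Opaque relabeling in first-seen order: "the"->"t0", "cat"->"t1", ...
vocab = {w: f"t{i}" for i, w in enumerate(dict.fromkeys(corpus))}
relabeled = [vocab[w] for w in corpus]

def bigram_counts(tokens):
    """All a text-only learner sees: adjacent co-occurrence counts."""
    return Counter(zip(tokens, tokens[1:]))

orig = bigram_counts(corpus)
masked = bigram_counts(relabeled)

# The count multisets are identical under the relabeling...
assert sorted(orig.values()) == sorted(masked.values())
# ...and each original bigram maps exactly onto its relabeled twin.
assert all(masked[(vocab[a], vocab[b])] == n for (a, b), n in orig.items())
```

<p>Since the statistics carry over untouched, whatever the model extracts from them cannot by itself be grounded in what the words refer to.</p>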
<h2 id="is-it-truly-a-foundation">Is It Truly a Foundation?</h2>
<p>It is natural to use the Turing test as a starting point for deciding whether these models actually understand language. A stochastic parrot would not pass the Turing test, so, on this critique, Foundation Models would not pass it either. We still do not completely know how our own neurological structures work, or what is involved, for language understanding. It is therefore difficult to say at present whether foundation models are an artificial equivalent of our neurological substrate for language comprehension. However, it seems very unlikely.</p>
<p>It can only be speculated whether a foundation model based on the same principles as our neurological basis for language understanding would actually understand language. Without proper grounding in perceptual data, or in ostensible objects in reality, we cannot expect a machine to fully understand through statistical learning alone. It seems more appropriate to build a foundation model based on neurological principles of language understanding if we want a machine that truly passes the Turing test.</p>
<h2 id="conclusion">Conclusion</h2>
<p>The evidence presented here compels me to infer that humans have a physical neurological substrate for language comprehension which enables us to understand the structure and grammar of language. This neurology exists as a structural foundation, naturally most receptive to language acquisition through speech acts. Due to the intrinsic structure of speech acts, the receiver must grasp the transmitter&#39;s intention, as well as the context of the utterance, for the communication to be understood. Speech acts can be seen as a direct parallel to conventional human interaction. This interaction is how adult humans engage with their children during the critical period, and can be seen as a sort of imprinting. The way of understanding speech is taught through statistical learning, by exposing children to speech acts in this critical period. Thus, children are taught to understand language in “the domain of human action and interaction”.</p>
<p>In my opinion, modern machine learning systems are merely «taught» to simulate language through statistical learning, by throwing millions of examples at a mathematical algorithm designed to extract some pattern from the examples given. I support Bender et al. (2021) in calling the current implementations of pre-trained language models “stochastic parrots”, because the structure of a foundation model does not compare structurally to our neurological foundation.</p>
<p>The only structure known to facilitate language comprehension is our neurological foundation. Only by truly understanding the human neurological structures for language comprehension, acquisition and understanding can we create a solid foundation for a GPLI. An NLP that mimics our neurological foundation, instead of current implementations of foundation models, would be a machine better enabled to acquire language as a human would: through speech acts.</p>
<p>However, looking at the big picture, these foundation models are an important step towards the ultimate goal: understandable AI. As Stanford themselves note, what the label of foundation models encapsulates is sure to change and grow as new research in this field emerges. And I am hopeful and optimistic that by taking into account the criticisms of foundation models, we are taking a step in the right direction.</p>
<h2 id="references">References:</h2>
<p>Austin, J. L. (1962). <em>How to do things with words: The William James lectures delivered at Harvard University in 1955</em>. Harvard Univ. Press.</p>
<p>Baldwin, D. A. (1995). Understanding the link between joint attention and language. In C. Moore &amp; P. J. Dunham (Eds.), <em>Joint attention: Its origins and role in development</em> (pp. 131–158). Lawrence Erlbaum Associates, Inc.</p>
<p>Binder, J. R., Frost, J. A., Hammeke, T. A., Cox, R. W., Rao, S. M., &amp; Prieto, T. (1997). Human Brain Language Areas Identified by Functional Magnetic Resonance Imaging. <em>The Journal of Neuroscience</em>, <em>17</em> (1), 353–362. <a href="https://doi.org/10.1523/jneurosci.17-01-00353.1997">https://doi.org/10.1523/jneurosci.17-01-00353.1997</a>.</p>
<p>Bender, E. M., Gebru, T., McMillan-Major, A., &amp; Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In <em>Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency</em> (pp. 610–623). Association for Computing Machinery. <a href="https://doi.org/10.1145/3442188">https://doi.org/10.1145/3442188</a>.</p>
<p>Bender, E. M., &amp; Koller, A. (2020). Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data. In <em>Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics</em> (pp. 5185–5198). Association for Computational Linguistics. <a href="https://aclanthology.org/2020.acl-main">https://aclanthology.org/2020.acl-main</a>.</p>
<p>Bommasani, R., Hudson, D.A., Adeli, E., Altman, R., Arora, S., Arx, S.V., Bernstein, M.S., Bohg,<br />J., Bosselut, A., Brunskill, E., Brynjolfsson, E., Buch, S., Card, D., Castellon, R., Chatterji, N.S.,<br />Chen, A., Creel, K., Davis, J., Demszky, D., Donahue, C., Doumbouya, M., Durmus, E., Ermon, S.,<br />Etchemendy, J., Ethayarajh, K., Fei-Fei, L., Finn, C., Gale, T., Gillespie, L.E., Goel, K., Goodman,<br />N.D., Grossman, S., Guha, N., Hashimoto, T., Henderson, P., Hewitt, J., Ho, D.E., Hong, J., Hsu,<br />K., Huang, J., Icard, T.F., Jain, S., Jurafsky, D., Kalluri, P., Karamcheti, S., Keeling, G., Khani, F.,<br />Khattab, O., Koh, P., Krass, M.S., Krishna, R., Kuditipudi, R., Kumar, A., Ladhak, F., Lee, M., Lee,<br />T., Leskovec, J., Levent, I., Li, X., Li, X., Ma, T., Malik, A., Manning, C.D., Mirchandani, S.P.,<br />Mitchell, E., Munyikwa, Z., Nair, S., Narayan, A., Narayanan, D., Newman, B., Nie, A., Niebles, J.,<br />Nilforoshan, H., Nyarko, J.F., Ogut, G., Orr, L., Papadimitriou, I., Park, J.S., Piech, C., Portelance,<br />E., Potts, C., Raghunathan, A., Reich, R., Ren, H., Rong, F., Roohani, Y.H., Ruiz, C., Ryan, J.K.,<br />R&#39;e, C., Sadigh, D., Sagawa, S., Santhanam, K., Shih, A., Srinivasan, K.P., Tamkin, A., Taori, R.,<br />Thomas, A.W., Tramèr, F., Wang, R.E., Wang, W., Wu, B., Wu, J., Wu, Y., Xie, S.M., Yasunaga, M.,<br />You, J., Zaharia, M.A., Zhang, M., Zhang, T., Zhang, X., Zhang, Y., Zheng, L., Zhou, K., &amp; Liang,<br />P. (2021). <em>On the Opportunities and Risks of Foundation Models. ArXiv, abs/2108.07258.</em></p>
<p>Clark, H. H., &amp; Clark, E. V. (1977). <em>Psychology and Language: An Introduction to Psycholinguistics.</em> Harcourt Brace Jovanovich.</p>
<p>Friedenberg, J. &amp; Silverman, G. (2016). <em>Cognitive Science: An Introduction to The Study of Mind (3rd Ed.)</em>. SAGE Publications.</p>
<p>Geschwind, N. (1972). Language and the Brain. <em>Scientific American</em>, 226(4), 76–83. <a href="https://doi.org/10.1038/scientificamerican0472-76">https://doi.org/10.1038/scientificamerican0472-76</a>.</p>
<p>Kuhl, P. K. (2007). Is speech learning “gated” by the social brain? <em>Developmental Science, 10</em>(1), 110–120. <a href="https://doi.org/10.1111/j.1467-7687.2007.00572.x">https://doi.org/10.1111/j.1467-7687.2007.00572.x</a>.</p>
<p>Lenneberg, E. (1967). <em>Biological Foundations of Language (1st corrected printing).</em> John Wiley &amp; Sons, Inc.</p>
<p>Searle, J. R. (1970). <em>Speech acts an essay in the philosophy of language</em>. Cambridge Univ. Press.</p>
<p>Searle, J. R. (1975). A Taxonomy of Illocutionary Acts. In K. Gunderson (Ed.), <em>Language, Mind, and Knowledge: Minnesota Studies in the Philosophy of Science</em> (pp. 344–370). Burns &amp; Maceachern Limited.</p>
<p>Tomasello, M., &amp; Farrar, M. J. (1986). Joint Attention and Early Language. <em>Child Development</em>, <em>57</em>(6), 1454. <a href="https://doi.org/10.2307/1130423">https://doi.org/10.2307/1130423</a></p>
<p>Warren, P. (2019). <em>Introducing Psycholinguistics (7th printing).</em> Cambridge University Press.</p>
<p>Winograd, T. (1980). What Does it Mean to Understand Language? <em>Cognitive Science</em>, 4(3), 209–241. <a href="https://doi.org/10.1207/s15516709cog0403_1">https://doi.org/10.1207/s15516709cog0403_1</a>.</p>
]]></content:encoded>
        </item>
    </channel>
</rss>