I for one welcome our new robot overlords…
2023-10-05
Many of the changes in the distribution of labor across sectors have been driven by changes in the efficiency of human labor, which can be thought of as the ratio of output to cost.
Force Multiplier: An external tool or resource that allows you to do more work with less effort. This can increase the output of a single worker, reducing the overall number of workers needed to perform a job.
Outsourcing: Outsourcing refers to the business practice of contracting out certain tasks or functions to third-party service providers instead of performing them in-house. One of the primary reasons companies outsource is to achieve cost reductions. Labor, infrastructure, or operational costs might be lower in another location or with a specialized provider.
What are common features of the sectors that displaced Agriculture and Manufacturing?
Knowledge work involves tasks that are information-based and cognitive in nature:
Aspects of human cognitive labor:
Here is a contract and our contract guidelines. Review and red-line the contract.
Does this Animal Care protocol conform to the federal guidelines?
I have a fever and a cough. What is wrong with me?
Gather data from these documents and summarize/analyze it.
How does this job applicant align with the needs described in our job description?
These tasks are difficult to automate because their inputs are complex, unstructured, and highly variable. They require human Cognitive Labor.
“The future is already here — it’s just not very evenly distributed.” (William Gibson)
Summarizing the history of AI and creating a visual timeline is an example of Knowledge Work. It requires Cognitive Labor on my part. Cognitive labor is hard.
Making a graph is hard.
Except it isn’t.
Generative Artificial Intelligence describes a group of algorithms or models that can be used to create new content, including text, code, images, video, audio, and simulations.
Examples:
ChatGPT: Text to Text. A Generative Pre-trained Transformer with a chat interface.
MidJourney: Text to Image
an abstract representation of an unemployment line caused by the emergence of artificial general intelligence, glowing blue computational network, photorealistic, dark technology, dark academia
KaiberAI: Text to Video
A Dystopian cyberpunk future where humans battle against cybernetic abominations from the works of the Lovecraftian Cthulhu Mythos. in the style of Photo real, hyper-realistic, high dynamic range, rich colors, lifelike textures, 8K UHD, high color depth, Nikon D 850, Kodak Portra 400, Fujifilm XT
Mubert: Text to Music
Hardtechno Dark John Carpenter Synth
A type of AI model designed to understand and generate human-like text.
Human Cognitive Tasks: Input → Human → Output
LLM Operations: Prompt → Model → Response
Take a large amount of text and produce a smaller output: the input is larger than the output.
Transmute the input into a new format: the input and output are roughly the same size and/or meaning.
Generate a large amount of text from a small set of instructions or data: the input is smaller than the output.
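These three operation types can be sketched as a simple size heuristic. This is an illustration, not an established rule; the 1.5 ratio threshold is an arbitrary assumption.

```python
def classify_operation(input_len: int, output_len: int, ratio: float = 1.5) -> str:
    """Rough heuristic: compare input and output sizes (e.g., word counts).

    The 1.5 threshold is an arbitrary assumption chosen for illustration.
    """
    if input_len > ratio * output_len:
        return "reduce"      # summarize, extract, condense
    if output_len > ratio * input_len:
        return "generate"    # expand instructions or data into prose
    return "transform"       # translate, reformat, rephrase

# A 2,000-word report summarized down to 200 words is a reduction.
print(classify_operation(2000, 200))  # reduce
```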
What are the characteristics of your task?
There are a TON of different models, and they vary substantially.
How do you evaluate the quality of the response?
Knowledge, facts, concepts, and information that is “embedded” in the model and must be “activated” by correct prompting.
Latent content originates only from the training data.
The abilities of Large Language Models (LLMs) that arise as a result of their extensive training on vast datasets, even if those capabilities were not explicitly “taught” or “programmed” into the model.
Unintended Abilities: Emergent capabilities might lead the model to perform tasks it wasn’t explicitly trained for. For example, an LLM might be trained on general text but can still translate between languages, write in different styles, or even generate code.
Beyond Training Data: While LLMs are trained on specific data, the patterns they recognize can be generalized to a variety of tasks. This generalization can sometimes lead to surprising and useful outputs, as well as unintended or harmful ones.
Complex Interactions: The interactions between the numerous parameters in LLMs can lead to behaviors that aren’t immediately predictable from the training data alone. These behaviors arise from the complex, multi-dimensional relationships the model has learned.
Potential and Pitfalls: Emergent capabilities can be both an asset and a liability. On one hand, they showcase the flexibility and adaptability of LLMs. On the other, they can result in outputs that are unexpected, inaccurate, or even biased.
Example - Metaphor Understanding: While not explicitly trained to understand metaphors, an LLM might still be able to interpret and generate metaphoric language due to its exposure to diverse linguistic constructs in its training data.
Example - Creativity: LLMs can generate creative text, poetry, or stories. While they don’t “understand” creativity in the human sense, their vast training allows them to mimic creative constructs.
Limitations: Despite their emergent capabilities, LLMs have limitations. They don’t truly “understand” concepts in the way humans do, and their outputs are entirely based on patterns in the data they’ve seen.
Instances where the model generates information that is not accurate, not based on its training data, or simply fabricated. Essentially, the model “hallucinates” details that aren’t grounded in reality or the facts it has been trained on.
Origin of the Term: The term “hallucination” is borrowed from human psychology, where it refers to perceiving things that aren’t present in reality. When applied to LLMs, it captures the idea that the model sometimes produces outputs that aren’t reflective of real facts or the content it was trained on.
Causes: Hallucinations can be caused by various factors, including the model misinterpreting the input, over-generalizing from its training data, or trying to complete ambiguous queries with details it “invents” on the spot.
Common in Open-ended Queries: LLMs are more prone to hallucinate when given open-ended prompts or when there’s a lack of specific context. In these situations, the model might fill in gaps with fabricated details.
Concerns: Hallucinations are a concern, especially when users rely on LLMs for accurate information. They highlight the importance of treating outputs from LLMs as suggestions or starting points rather than definitive answers.
Mitigation: Efforts to mitigate hallucinations in LLMs include refining model architectures, improving training data, and employing post-training fine-tuning or validation processes. Additionally, user feedback and iterative development are crucial for identifying and reducing such issues.
Complex job functions are often a series of tasks.
Treat each task as an information problem.
Define each task in terms of Operations.
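As a sketch of this decomposition, a job function might be represented as a list of tasks, each tagged with its operation type. The contract-review tasks below are illustrative, not a prescribed workflow.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    operation: str  # "reduce", "transform", or "generate"

# A hypothetical contract-review job broken into tasks (names are illustrative).
contract_review = [
    Task("extract key clauses", "reduce"),
    Task("rewrite clauses to match guidelines", "transform"),
    Task("draft a summary memo for the reviewer", "generate"),
]
```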
Prompt Engineering: The creation of the prompt is carefully engineered to provide optimal instructions and context to the LLM. Factors like tone, point of view, and conversational style help steer the LLM’s response.
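A minimal sketch of prompt assembly, where the persona, tone, and field layout are all assumptions chosen for illustration rather than a standard template:

```python
def build_prompt(task: str, context: str, tone: str = "neutral",
                 persona: str = "an experienced contracts analyst") -> str:
    """Assemble a prompt from reusable parts; all field names are illustrative."""
    return (
        f"You are {persona}. Respond in a {tone} tone.\n\n"
        f"Context:\n{context}\n\n"
        f"Task: {task}"
    )

prompt = build_prompt(
    task="Red-line any clauses that conflict with the guidelines.",
    context="<contract text> ... <contract guidelines>",
    tone="formal",
)
```

Changing the persona or tone fields steers the response without rewriting the whole prompt.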
Model Selection: Huggingface.co currently features 350,247 models and they have a leaderboard for performance!
Evaluate the response(s) against data you trust.
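One simple (and admittedly crude) way to check a response against data you trust is to measure how many known facts actually appear in it. Substring matching is an assumption made for brevity; real evaluation is usually more sophisticated.

```python
def fact_coverage(response: str, trusted_facts: list[str]) -> float:
    """Fraction of trusted facts that appear verbatim in the response."""
    hits = sum(1 for fact in trusted_facts if fact.lower() in response.lower())
    return hits / len(trusted_facts)

score = fact_coverage(
    "The award totals $50,000 and runs through 2025.",
    ["$50,000", "2025"],
)
print(score)  # 1.0
```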
I need to extract the information from this award letter and put it in our database.
I found this cool data report with a bunch of data that I need for my analysis.
Does this contract adhere to our review criteria?
https://www.youtube.com/@samwitteveenai
https://www.youtube.com/@4IR.David.Shapiro
https://www.youtube.com/@mreflow
https://www.futuretools.io
https://snorkel.ai/large-language-models-llms/
https://www.mckinsey.com/mgi/our-research/generative-ai-and-the-future-of-work-in-america
https://thehill.com/opinion/congress-blog/3913530-artificial-intelligence-is-not-going-to-take-all-our-jobs/
https://arxiv.org/abs/2303.10130
https://www.analyticsvidhya.com/blog/2023/07/the-fascinating-evolution-of-generative-ai/
https://kyleake.medium.com/data-behind-the-large-language-models-llm-gpt-and-beyond-8b34f508b5de
https://arxiv.org/pdf/2303.08774.pdf
https://www.fiddler.ai/blog/the-missing-link-in-generative-ai
https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai
https://www.nvidia.com/en-us/glossary/data-science/large-language-models/
https://blogs.nvidia.com/blog/2023/01/26/what-are-large-language-models-used-for/
https://www.promptengineering.org/what-are-large-language-model-llm-agents/
brobison@uidaho.edu