Science Is a Way of Thinking, Not a Title
Science isn’t a title but a mindset. In the age of AI, titles don’t protect truth—reproducibility does. Anyone who asks, tests, and learns is already a scientist.
In the Czech Republic, a curious thing still happens in conversations about science. Before people ask what you’ve found, they ask who you are. Not your name—your rank. PhDr? RNDr? Doc.? As if ideas needed epaulettes to be taken seriously. That reflex made sense in a world where knowledge lived behind library doors and lab walls. It makes much less sense now, when most of the world’s methods and results sit a search away and the hard part is no longer getting to information but thinking well with it.
We mix up two very different things: science the institution and science the method. The first grants titles, offices, and budgets. The second grants nothing—except the chance to be wrong in public and less wrong tomorrow. A diploma proves you once satisfied an institution’s requirements. It does not prove you’re currently doing science, any more than a driver’s license proves you’re a good driver. It proves you learned to operate the machine safely enough to be allowed on the road. After that, it’s how you drive that matters.
People who like the title-first world are not fools. Their arguments sound reasonable precisely because they’re half-true. Take the most common one: credentials signal expertise. Of course they do—as a prior, a rough guess before you see the work. The mistake is stopping there. In science, priors exist to be updated. The only thing that should outweigh a title is evidence you can check: data, code, procedures, and replications that don’t depend on trusting you. Titles protect status; artifacts protect truth.
A second objection arrives quickly: you can’t “learn a field in minutes” by asking an AI. And you can’t. No serious person believes mastery is instant. What changed is not the amount of learning but the latency of feedback. Literature triage that used to take months takes days. Exploratory models that took weeks to build now take hours. You still need judgment, but now judgment runs more cycles. Depth used to mean ten years in a hallway lined with the same offices; now it can mean a thousand ask-test-learn loops that fit inside a year. Depth is loops, not birthdays.
Then there’s the fear that if you remove the gate, charlatans will rush in. It’s a legitimate fear with a bad premise: that gatekeeping works. Look around. The replication crisis was not driven by garage tinkerers; it came out of respected labs, prestigious journals, and results that couldn’t be reproduced on Tuesday. Charlatans are a failure of process design, not of outsiderness. The fix is unglamorous and powerful: require preregistration when it’s appropriate; publish data and code; assign independent replication partners; separate “exploratory notes” from “claims you’d bet your career on.” Make it cheaper to be right than to look right.
Someone will object that reading is not doing. They’re correct—especially in domains where tacit craft matters: cell culture, fieldwork, patient care. You don’t learn a clean pipetting motion from a paper any more than you learn rhythm from sheet music. But the barrier here isn’t the lack of titles; it’s the lack of practice. And practice is changing form. Community labs, DIY-bio spaces, simulation environments, remote rigs, and mentoring collectives make apprenticeship portable. In computational sciences the gap has already collapsed: a curious teenager with Python, a GPU, and a stubborn streak can extend a line of work. We shouldn’t pretend wet-lab skill can be downloaded, but we also shouldn’t pretend the only road in runs through a registrar’s office.
A cousin of that argument says statistics and causal inference are too easy to misuse; let amateurs loose and the noise will swamp the signal. As if misuse were something institutions had solved. Everyone has p-hacked, even if only in their heads. The remedy isn’t pedigree; it’s scaffolding: directed acyclic graphs to make causal assumptions explicit; power analyses that precommit to detectable effects; robustness checks; simulation-based calibration and sensitivity analyses; review checklists that treat “negative” results as first-class citizens. The method disciplines us because people—credentialed or not—are prone to see patterns where none exist.
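To make one piece of that scaffolding concrete: “power analyses that precommit to detectable effects” just means deciding, before any data are collected, what the smallest effect you would still act on is and how many observations you need to have a fair chance of seeing it. A minimal sketch using the statsmodels library; the effect size and thresholds below are placeholders for illustration, not recommendations:

```python
# A sketch of precommitting to a detectable effect (assumes statsmodels is installed).
from statsmodels.stats.power import TTestIndPower

smallest_effect = 0.4   # smallest Cohen's d you would still act on (placeholder)
alpha = 0.05            # false-positive rate you precommit to
power = 0.80            # chance of detecting the effect if it is really there

# Sample size per group for a two-sided independent-samples t-test.
n_per_group = TTestIndPower().solve_power(
    effect_size=smallest_effect, alpha=alpha, power=power, alternative="two-sided"
)
print(f"Recruit at least {n_per_group:.0f} participants per group before you peek.")
```

Written down before the data arrive, those three numbers are a precommitment; written down afterwards, they are rationalization.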
Peer review is the filter, another objection goes. It is a filter—slow, opaque, and leaky. The answer is not to discard it but to upgrade it. Registered reports fix a chunk of publication bias. Open reviews expose reasoning to daylight. Post-publication critique turns “accept” from a finish line into mile one of a relay. The goal isn’t to abolish journals; it’s to replace “trust us” with “show us.” If your claim leans on authority, it’s fragile. If it leans on artifacts, it travels.
You’ll hear the apprenticeship version: real understanding takes ten years. Sometimes it does. But we mistake clocks for causes. The cause is exposure to problems plus feedback on attempts. When tools compress that loop, time served stops being a good proxy. It’s like insisting you can’t be a photographer until you’ve mastered a darkroom. That used to be true because a darkroom was the cheapest way to close the loop from shot to print. Digital collapsed that loop. The point was never the chemicals. It was the iterations.
There’s a subtler argument that domain context prevents naive mistakes. True. Context saves you from reinventing known errors. It also blinds you to invisible fences everyone else politely walks around. Outsiders miss some pitfalls; insiders miss some possibilities. We can design around both. Write “pitfall catalogs” for fields the way aviation writes safety manuals. Maintain failure case libraries. Pair newcomers with domain mentors. Require a “known pitfalls consulted” section in papers, with links to the specific failure modes considered and how you guarded against them. The answer to naiveté is not gatekeeping; it’s documentation and pairing.
Ethics and safety are where the temperature rises. Surely we can’t let anyone do anything. Of course not. Oversight and red lines matter. But oversight is a function—review, monitor, halt—not a title. Independent IRBs, community ethics boards, standardized risk matrices, and transparency about dual-use risk can coexist with broad participation. Draw bright lines: you don’t do human or animal studies without approval; you don’t publish dual-use methods without mitigations; you don’t deploy high-risk interventions without monitoring plans. The system keeps people safe, not the letters after their names.
AI itself becomes a target: it hallucinates. Yes. Humans do too—we just call it confidence. The solution in both cases is to force claims to carry their sources with them and to separate summaries from citations you can click. Use retrieval-locked pipelines. Maintain a “citation integrity log.” Ban claim-only outputs in scientific contexts. If an AI can accelerate your literature review, use it; if it can’t show work, don’t.
Another worry: you can’t do science alone; it’s a community practice. Absolutely. The point of demoting titles is not to atomize science; it’s to open the doors. Community emerges through open lab notebooks, issue trackers, preprint feedback threads, code review clubs, Discord servers and Slack groups where methods are discussed and experiments are debugged at 2 a.m. The right measure of belonging is contribution, not clearance.
A practical objection follows the money: funding, data, and compute require institutions. Increasingly, they don’t—at least not exclusively. There are cloud providers with research credits, public datasets, model zoos, pooled compute cooperatives, and philanthropic micro-grants that move faster than big agencies can. You still disclose conflicts, provenance, and compute budgets. You still document data licenses and consent. But the gate is no longer padlocked.
There’s also the myth that the outsider hero is mostly a movie trope. That’s healthy skepticism. Most “rebel genius” stories are invented in hindsight. But notice how that cuts both ways: if most projects fail, the fix is not to shrink the set of people allowed to try; it’s to strengthen the filters before strong claims. Label early work as exploratory. Demand independent replications before press releases. Put the emphasis on the process rather than the press. You can welcome a million explorers and still protect the map.
A more philosophical pushback says the hard part of science is choosing the right questions, and AI doesn’t help with judgment. Exactly. AI expands the search space; it doesn’t pick your destination. That’s on you. In fact, the human role intensifies: problem framing, choosing what matters, deciding what would convince you you’re wrong. A simple discipline helps: include an “impact and falsifiability” box with every project—why the question matters, the minimal result that would change a decision, and the observation that would prove you wrong quickly. If you can’t write that box, you’re not ready to run.
If all this sounds like a manifesto to tear down universities, it isn’t. Institutions are useful. They aggregate resources, train people, and create continuity. The argument is narrower and more subversive: stop confusing the outfit with the athlete. A lab coat is not a method, and the method is not a mystery reserved for the initiated. We need more places where people can learn to think scientifically, not fewer. We need more apprenticeships, not more incantations. We need less worship of rank and more respect for the work.
So what does it look like—practically—to live by “science is a way of thinking” rather than “science is a title”?
You publish artifacts, not just claims. Data (with clear licenses), code (with environment files), training scripts, seeds. If someone can’t run your work, you haven’t finished; a minimal sketch of what that looks like follows this list.
You precommit where it matters. For confirmatory studies, preregister analyses and endpoints. For exploratory work, label it explicitly so readers know which parts are map and which are compass.
You invite adversaries. Assign a friendly rival to replicate your result. Pay them in authorship, bounties, or both. Red teams catch more than your friends do.
You separate computational from experimental claims. Simulations are not experiments; they’re ideas with better graphics. Say what each does and does not show.
You track accountability at the artifact level. ORCID for identity, DOIs for datasets and models, signed commits for code, audit trails for major changes. Titles don’t create accountability; traceability does.
You report uncertainty as a citizen, not as a lawyer. What did you try that failed? Where might you be wrong? What would change your mind?
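None of this needs special tooling. As a minimal sketch of the first item on that list (publishing artifacts someone else can actually rerun), here is what writing down the seed, the environment, and a fingerprint of the input data might look like in Python; the file names and the seed are placeholders:

```python
# A sketch of shipping a result with its reproducibility metadata:
# fix the random seed, record the environment, and fingerprint the input data
# so that someone who is not you can repeat and check the run.
import hashlib
import json
import platform
import random
import sys

SEED = 42  # placeholder; the point is that it is written down, not hidden
random.seed(SEED)

def fingerprint(path: str) -> str:
    """SHA-256 of the input file, so readers know exactly which data you used."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

manifest = {
    "seed": SEED,
    "python": sys.version,
    "platform": platform.platform(),
    "data_sha256": fingerprint("data.csv"),  # hypothetical input file
}

with open("run_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```

The manifest is not the science; it is the trail that lets a stranger walk the same path without having to trust you.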
None of this is anti-academic. It is anti-mystique. The mystique says the right incantations in Latin and expects the universe to yield. The method says: here is what I did, here is everything you need to do it too, here is where I might be wrong, please check. One approach builds priests; the other builds progress.
There’s a cultural piece here, especially at home. Czechs are allergic to pretension, which is why our title-mania is so odd. We are brilliant at puncturing the pompous in politics, and yet we let academic pomp walk around unchallenged. It would be more consistent—and more useful—to keep our skepticism aimed at arguments, not at people without badges. If someone with no title shows you a replicable method that makes a real problem smaller, the correct national response is “Díky” (“thanks”), not “Kdo jste?” (“who are you?”).
If your only value as a scientist is remembering what you wrote in your dissertation, you’re an archivist of your younger self. Real scientists stay alive to the world. They start new loops. They create more scientists—by mentoring, by publishing legible work, by refusing to make knowledge look harder than it is. They don’t sit inside institutions policing who counts as literate; they turn curiosity into value.
In an age when AI can shrink the distance between a question and a first answer, the important divide is not between “scientists” and “non-scientists.” It’s between people who practice the scientific way of thinking and people who don’t. The former will sometimes lack the expected titles. The latter will sometimes have all of them.
If you can ask a question clearly, design a test, run it, show your work, and change your mind when the world says no—you are doing science. If you can help someone else do the same, you are making scientists. And that’s the only kind of credential the future will care about.