Author: JuanCarlosPS

  • AI, coding assistants and the marketplace

    AI, coding assistants and the marketplace

    I am seeing only the surface of the power of coding assistants, but I can already imagine what a Senior Software Engineer can do today with the available AI tools. The most well-known ones right now are GitHub Copilot, Claude Code, ChatGPT, and Amazon Q. As far as I know, only GitHub Copilot offers a free tier, capped at a monthly quota of code completions and chats. In general, using one of these tools costs at least 10 dollars per month.

    With the help of these coding assistants, the major skill I need is to read fast. Writing code is not strictly necessary anymore. Writing prompts and chats is probably becoming the main task, because the code can be written by the assistant. However, reading quickly and understanding the code is crucial, so that it can be modified manually if necessary (although it would often be sufficient to re-run the AI multiple times so that it finds and fixes its own bugs).

    A Senior Software Engineer, with the help of these coding assistants, is probably at least 20 times faster at writing code. The number 20 is just a personal estimate based on my feeling; I am sure it could be much higher. You can sit there, assign different tasks to your AI assistants, and let them generate the code. The role of the Senior Software Engineer is to double-check that everything works perfectly.

    What is required today is, of course, expertise — but the most important skill is common sense combined with knowledge. This means a deep understanding of the business, the goals, and how technology works. The “real labor” is now largely done by code assistants. There is no doubt that these tools will have a massive impact on businesses and the marketplace.

    The use of AI tools is creating a huge gap between beginners and senior workers. I can already see this impact in different places. In my opinion, it is becoming increasingly clear that everyone will need to adapt to AI. Not everyone will need to understand AI deeply, but at least everyone will need to understand and use AI tools.

    AI Agents

    AI agents are already here. When I first heard about them in 2025, I thought they might still be a year away; instead, they are already present. The impact is that everyone needs to become much more effective, and I believe that, in a short time, this will become the new normal. This also means paying for services and for the right hardware. In other words, there will be increased expenses for AI services and for more powerful computers that can handle larger amounts of data at higher speeds.

    Normalization

    Today, the market is already splitting almost everything into “effective” and “non-effective.” In many areas, “effective” means someone with AI knowledge or someone who can properly use AI tools. “Non-effective” is someone who works the same way as five years ago, or someone who uses AI tools without really understanding what they are doing.

    It will take some time until the use of AI becomes normalized, and then we will have a new baseline. This is the curse of capitalism: there is no final goal for efficiency. The bar is constantly raised. The hunger is endless. This is good news, because it means jobs will continue to exist. At the same time, it is also bad news, because anyone who is unable to adapt to the new marketplace is simply left behind.

    There are many ways to look at reality. When talking about AI, I am using a capitalist lens. Some people know that other realities exist and believe that the presence or absence of AI is completely irrelevant. But this is not the topic here. In our main reality, we need to adapt to AI-driven changes.

    If you are working today, there is no need to panic. Just think about how to learn, step by step, new tools that can make your job more effective.

    Juan Carlos

  • How AI is affecting consulting

    How AI is affecting consulting

    The conversation with ChatGPT

    To demonstrate how AI is affecting consulting, it is a good idea to show what today’s technology can do. It is important to note that this technology is available to everyone and is free. Just imagine the possibilities when even more powerful tools are available for those willing to pay, or what experts can achieve using these tools.

    Let’s start with a very practical example of the power of AI tools. ChatGPT is a free tool available to everybody, and it has a voice function that lets you practically “talk” to it. There is also the option to choose a voice; I selected Maple because it is described as sounding happy and sincere. I started a voice conversation, which means I didn’t even need to type. Some months ago, the voice sounded very robotic. Today, it sounds incredibly real, and the Maple voice is indeed as happy and sincere as promised.

    For practical reasons, I asked something easy but common in current digitalization problems.

    There is no doubt that the AI tool’s English is almost perfect, in pronunciation as well as in grammar. English is the third language I learned (my mother tongue is Spanish, and I speak German at a native level). It feels good to have a pleasant voice answering you so politely and professionally.

    First, I told ChatGPT that I am a manager working in a laboratory, and ChatGPT should be the experienced consultant. ChatGPT understood the task perfectly:

    Then, after some questions, I wanted to know the general steps that would lead me to a solution before knowing much about the details.

    And I wanted to know more about step one, so I asked ChatGPT for the substeps of step 1 (identify the key processes that need improvement):

    What I think about this conversation

    Some years ago, only people with experience in the field could have clarity about the steps needed to help a customer (in this case, a laboratory manager) select new software. Today, ChatGPT knows this very well. As always, the difficulty lies in the details, and that is where the real work happens. What is the impact of this on consultancy? I think the change will be more on the customer’s side, because anyone can now prepare an excellent presentation with the help of ChatGPT without being an expert. That means the customer will need to recognize the good teams and, from the good teams, select the best one. These are the changes happening now and leading to further changes.

    • First change: Customers need to be smarter in identifying the best consulting team, since today “knowledge” has become cheap. There are clear steps customers can take. This is not the topic of this post, but you can write me a message on LinkedIn to talk about it.
    • Second change: Once customers can recognize the best consulting team, consulting companies will need to readjust their strategy and become more strict in selecting their consultants. The set of skills needed today is different from what was needed, for example, 10 years ago.
    • Third change: The industry will shift from “dead knowledge” to “living knowledge.” I just invented these two terms to distinguish between knowledge any machine can provide (“dead knowledge”) and knowledge that only a creative expert can offer (“living knowledge”). The expert does not necessarily need to be a specialist in a field, but they must have a set of skills and knowledge that makes them capable of delivering consulting services that truly help customers.

    This all sounds very cryptic

    Yes, because it would take many articles to really explain these different points. But the most important message here is: information has become cheap. And maybe it is better to use the word information if we want the word knowledge to maintain its important status. Maybe this is a much better distinction. Information has become so cheap that any machine can tell you anything. But knowledge—and only a knowledgeable person—can determine if this information is correct, if it can be applied in a specific case, and, most importantly, what to do when the standard way doesn’t work. That is the definition of an expert: somebody who knows what to do when things don’t go as expected.

    Summary

    Information is cheap today. Consultancy can no longer rely on information alone. For this reason, customers need to learn to recognize good consulting teams—in other words, to distinguish teams that only have information from those that have both information and knowledge.

    The three main characteristics of the consultant of the future:

    • An expert consultant can recognize which information is useful.
    • An expert consultant knows which information to apply, in which situation, and at what time.
    • An expert consultant knows what to do when things do not go as expected.

    An AI tool will never be able to do that because, despite all the information it possesses, it doesn’t have the intuition, creativity, knowledge, experience, or common sense to apply information in the correct way. The real work lies in the details. A lot of money and time are lost because paying attention to details is difficult, and only experts can recognize them.

    Juan Carlos

  • How AI is affecting programming

    How AI is affecting programming

    The Basics

    To understand how AI is affecting programming, a short introduction to programming is necessary. Programming is the process of creating software, which involves multiple tasks such as problem analysis, requirements gathering, system design, coding, testing, debugging, integration, deployment, and maintenance.

    In software development, the Software Development Lifecycle (SDLC) is essential because it provides a structured framework to guide and manage these tasks. While there are several SDLC models—today, the Agile model is among the most widely known and applied—the fundamental steps remain closely aligned with the tasks mentioned above.

    • Requirement Analysis
    • Planning
    • Design
    • Implementation (Coding)
    • Testing
    • Deployment
    • Maintenance & Updates

    Skills needed in SDLC

    I asked ChatGPT to create a table with the skills needed in the different steps of the SDLC, and which can be replaced by AI. This is the result:

    | SDLC Phase | Professional Roles | Key Skills Required | AI Replacement Potential |
    | --- | --- | --- | --- |
    | Requirement Analysis | Business Analyst, Product Owner, Consultant | Communication, domain knowledge (finance, pharma, etc.), stakeholder management, requirements documentation | ❌ Low – AI can summarize, but human interaction & negotiation are irreplaceable. |
    | Planning | Project Manager, Scrum Master | Project management (Agile, Scrum, PRINCE2), risk management, resource allocation, leadership | ⚠️ Medium – AI tools can suggest timelines/resources, but leadership & decision-making remain human tasks. |
    | Design | Solution Architect, UX/UI Designer | System architecture, database design, UI/UX principles, creativity, design thinking | ⚠️ Medium – AI can propose architectures & design mockups, but creativity & domain fit need humans. |
    | Implementation (Coding) | Software Developer, Data Engineer, Mobile App Dev | Programming languages (Java, Python, C++), debugging, clean coding, DevOps basics | ✅ High – AI copilots can already write/debug code, but deep architecture and complex problem-solving still need humans. |
    | Testing | QA Engineer, Test Automation Specialist | Manual/automated testing, test case design, bug tracking, performance/security testing | ✅ High – AI is very good at test automation & bug detection, but exploratory/manual testing is still valuable. |
    | Deployment | DevOps Engineer, Cloud Engineer | CI/CD pipelines, containerization (Docker, Kubernetes), cloud platforms (AWS, Azure, GCP) | ⚠️ Medium – AI can optimize deployments and auto-scale infrastructure, but strategic oversight remains human. |
    | Maintenance & Updates | Support Engineer, Site Reliability Engineer (SRE) | Monitoring, troubleshooting, patch management, security updates | ⚠️ Medium/High – AI can auto-detect issues and even self-heal systems, but complex incident handling is still human-driven. |
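    The table’s verdicts can be captured in a small sketch, which makes it easy to filter phases by how exposed they are to AI. This is only a toy illustration: the ratings are taken from the table above, not from any authoritative assessment.

```python
# Toy sketch: the table's AI-replacement ratings as a dictionary.
# Ratings are copied from the SDLC table above.
SDLC_AI_POTENTIAL = {
    "Requirement Analysis": "Low",
    "Planning": "Medium",
    "Design": "Medium",
    "Implementation (Coding)": "High",
    "Testing": "High",
    "Deployment": "Medium",
    "Maintenance & Updates": "Medium/High",
}

def phases_with_potential(rating: str) -> list[str]:
    """Return the SDLC phases whose rating contains the given level."""
    return [phase for phase, level in SDLC_AI_POTENTIAL.items()
            if rating in level]

print(phases_with_potential("High"))
# → ['Implementation (Coding)', 'Testing', 'Maintenance & Updates']
```

    Note that "Medium/High" matches both a "Medium" and a "High" query, which mirrors the mixed verdict the table gives for maintenance work.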

    This may reflect the current status of AI’s influence on the SDLC. For me, the most important point is the business impact of AI. I have already heard from several programmers that the market no longer looks as promising as it once did. A few years ago, companies were actively hunting for programmers and offering excellent conditions. Today, however, the hype seems to have faded. A shift in the programmer job market has already occurred—not because programming is no longer necessary, but because fewer programmers can now accomplish much more with the help of AI.

    As a result, the expected level of expertise for entry-level programmers will rise. This may be good for companies and the overall market, but it represents a real challenge for new graduates entering the field.

    And what about the other steps of the SDLC? The same trend will apply. There is no reason why professionals should not become more effective with AI. For example, a requirements engineer can gather client inputs, let AI generate summaries, highlight key questions, and then review and refine the results. A skilled engineer can validate these answers, make corrections if necessary, and finalize the work. As long as AI is not perfect—which may take a long time—experts will remain essential. However, the amount of repetitive work is already being reduced dramatically.
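    The review loop described above, where the AI drafts and the expert validates, can be sketched in a few lines of Python. The `ai_summarize` stub is a placeholder standing in for whatever assistant is actually used; it is not a real API.

```python
# Sketch of the "AI drafts, human reviews" loop for requirements work.
# ai_summarize is a stand-in for a real assistant call, not a real API.
def ai_summarize(notes: list[str]) -> str:
    # Placeholder: a real system would call an LLM here.
    return " ".join(n.strip().rstrip(".") + "." for n in notes)

def requirements_draft(client_notes: list[str], reviewer) -> str:
    draft = ai_summarize(client_notes)  # AI does the repetitive work
    return reviewer(draft)              # the expert validates and corrects

notes = ["Lab needs sample tracking", "Must integrate with existing LIMS"]
final = requirements_draft(notes, reviewer=lambda d: d.replace("LIMS", "LIMS system"))
print(final)
```

    The point of the sketch is the division of labor: the assistant produces the bulk of the text, while the human reviewer remains the last step before anything is finalized.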

    Market Behavior

    • In February 2025, Ocado cut 500 technology and finance jobs because AI is reducing costs.
    • In July 2025, Scale AI laid off 200 employees after ramping up its GenAI capacity too quickly.
    • In July 2025, Microsoft cut about 4% of its jobs amid hefty AI bets.
    • In September 2025, Fiverr laid off 250 workers in an AI refocus effort.
    • In September 2025, one software company announced it was laying off 20% of its employees in a shift to bold AI bets.

    A large list of tech layoffs in 2025 can be found here.

    Discussion

    What we are witnessing now is a shift in the market. This is not an apocalyptic scenario, but rather a transition in the demand for specific skills. Some skills are becoming less valuable or even obsolete, while others are increasingly sought after. Roles that have become less productive due to AI are particularly vulnerable, and many companies are already planning to reduce them.

    On the other hand, there is a growing demand for AI/ML engineers, DevOps specialists, data annotation and review professionals, as well as experts in automation and AI system design. This transformation presents opportunities, but also risks.

    In theory, such a shift might not need to result in job losses if workers successfully adapt and reskill. However, since AI is promising significant gains in efficiency, companies focused on boosting revenue may be faster to cut jobs than workers can adjust to the changes.

    Juan Carlos

    Similar posts:

  • How AI Is Affecting Science?

    How AI Is Affecting Science?

    The scientific method consists of the following steps: observation, problem formulation, prior research, hypothesis formulation, experimentation, analysis of results, conclusion, communication, repetition, and review. Today, AI has a major influence on all those steps related to the generation and processing of data. These influences can be both positive and negative.

    Positive Influences

    • Analysis of large volumes of data: What used to be a manual and tedious task can now be performed efficiently and quickly by AI.
    • Simulation: AI can not only help reduce the number of experiments required, but in the future, it could eliminate the need for some experiments in certain cases.
    • Collaboration: Collaboration among scientists will be accelerated thanks to various AI tools.
    • Discovery of hidden patterns: Previously generated data could be used to uncover hidden patterns through AI.
    • Automation: Both in data processing and experimentation, automation is one of the areas with the greatest potential for AI.
    • Data publication: What used to be a task for scientists can now be performed by AI, provided the data is available in the proper format.

    Negative Influences

    • Critical thinking: Constant use of AI could dull scientists’ way of thinking. Critical thinking is the foundation of science, and overreliance on AI risks abandoning this type of reasoning.
    • Dependence: When discussing AI, it generally refers to using pre-existing models. Scientists who want to gain deeper understanding need to be involved in developing AI models; otherwise, they merely become users of a program.
    • Data biases: It is crucial to understand how data is formatted, as this is as important as the AI model being used. Incomplete or biased data can lead to false conclusions.
    • Ethics and intellectual property: AI can hallucinate; therefore, questions arise: Who is responsible for data generated by AI—the scientist, the model creator, or the scientist’s employer? To whom do discoveries from scientific research belong—the scientist, the employer, the company owning the AI model, or all of them?
    • Inequality of access: Science is already unequal, depending heavily on budgets and the hypotheses to be tested. AI may increase this gap, as only those with the financial resources and knowledge to use AI can accelerate their research.

    Critical Perspective

    These are some of AI’s influences on science. Since science is very broad and spans different industries, the most important aspect is the critical thinking a scientist must maintain when using AI.

    • What type of model should I use?
    • How should my data be formatted to be processed by AI?
    • How should I verify the results?
    • Which experiments are no longer necessary thanks to improved experimental design enabled by AI?

    Juan Carlos

  • How AI is affecting writing

    How AI is affecting writing

    The first thought many people would have is this: “If artificial intelligence today can create an article in milliseconds, then the answer is very simple: writers will disappear…”

    There are many ways to approach this topic. One possible way is to write from an economic perspective. Another is to write from a philosophical one. While the economic perspective tends to attract more attention (“The cost of hiring a writer vs. the cost of using Artificial Intelligence”), the philosophical perspective is deeper (“The true importance of writers”) and also helps us better understand the economy. That’s why I will begin from the philosophical point of view.

    In order to feel, humans want to read from other humans.

    Fortunately, the claim at the beginning of the article is completely wrong. I am convinced that there will never be a moment in human history, no matter how advanced it becomes, in which writers will be rendered superfluous. To begin with, every human being, as a sentient being, can only truly feel something when reading something written by another sentient being. One might object that if AI can imitate humans so well, then it could also write texts that move people’s emotions. It’s a good argument, but as soon as someone knows that a text was written by AI, they will no longer want to read it.

    Here is a simple argument to support this. Although humans differ depending on their cultures, languages, and ideologies, let us take the topic of love —the most cliché topic that exists— as an example. Every human being knows what love is, or at least has a notion of it. Someone with a broken heart will not want to read a love story written by AI —no matter how well written it is— but rather a real love story written by a real person. That is the essence of being human: to feel.

    Writers are critical, and criticism is necessary for society.

    Leaving love aside, there is another very strong argument that defends the importance of writers. No matter how advanced AI becomes, it is very unlikely that it will ever be 100% critical. AI is trained to fulfill a specific function. Being critical is something innate to human beings, and that capacity arises from the fusion of intellectual abilities with emotions and feelings. Humanity needs writers because true writers are always pointing out the big and small issues of humanity, posing questions that no one dares to ask, or searching for answers to those questions.

    I could write about more arguments, but for me these two are more than enough. Human beings need to read texts written by other humans because it is part of what it means to feel. And being critical of the world and of all possible topics is an intrinsic quality of writers. Therefore, AI will never be able to replace writers. Now I can move on to the economic perspective.

    The tasks that will probably be done by AI

    There are many jobs and tasks that involve writing. If this type of writing does not need to move emotions or feelings, and does not require analyzing a topic in great complexity, then the probability that AI will replace these jobs and perform these tasks with the same effectiveness —or even greater— is very high. Here are some examples of tasks that do not necessarily need to be done by humans: text summaries, data reports, text translations, etc. AI will probably be able to perform these tasks very soon and in milliseconds.
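    To make the argument concrete, here is a toy illustration of how mechanical such writing can be: a function that turns raw numbers into a one-sentence data report. The function name and the numbers are hypothetical, invented only for this example.

```python
# Toy sketch: turning raw project numbers into a one-sentence report,
# the kind of mechanical writing task the text argues AI will absorb.
def data_report(name: str, values: list[float]) -> str:
    n = len(values)
    mean = sum(values) / n
    return (f"Report for {name}: {n} measurements, "
            f"average {mean:.2f}, min {min(values)}, max {max(values)}.")

print(data_report("Lab throughput", [12.0, 15.5, 11.2, 14.3]))
# → Report for Lab throughput: 4 measurements, average 13.25, min 11.2, max 15.5.
```

    No emotion, no analysis, no creativity: just data flowing into a template. Anything that fits this pattern is exactly the kind of writing that no longer needs a human.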

    The following articles will deal with consulting, science, and programming, to simply connect these fields with writing. Companies will no longer have to pay thousands of dollars for reports, scientists will no longer have to spend days or weeks writing academic papers —although the discussion section will probably always be something they must do themselves—, and programmers will not have to write documentation for their code. As long as AI has access to the basic data —data that should be collected properly throughout projects— it will be able to generate reports, papers, and documentation in a matter of seconds.

    Juan Carlos

    Other related articles:

  • The Stock Market Is Speaking: Biotech, Pharma And AI Right Now

    The Stock Market Is Speaking: Biotech, Pharma And AI Right Now

    These days, the stock market is reflecting the anxiety within the biotech and pharmaceutical industries as they navigate a complex landscape of shifting priorities and economic uncertainty. On one hand, the potential return of Trump-era tariffs (see article in Financial Times “Trump demands drug companies lower prices before end of September”) is creating an atmosphere of unpredictability, forcing companies to reassess costs and global supply chains. On the other hand, artificial intelligence (AI) remains the dominant narrative across industries. And that’s exactly where the money is going right now.

    AI Is a Powerful Tool – But Still Just a Tool

    There’s no doubt that AI offers massive potential, and its applications will shape industries for years to come. But it’s also clear that AI is experiencing a moment of hyper-hype. At the end of the day, markets are driven by user demand, not buzzwords. People don’t use products simply because they involve AI—they use them because they solve a problem, offer value, or improve an experience. If AI contributes to that value, great—but it’s not the main selling point.

    In biotech and pharma, AI will become a powerful tool among many. These companies will continue doing what they’ve always done: developing medicines, biologics, and therapies that improve human health. While capital is currently flowing into AI-related ventures, it’s worth remembering that, ultimately, the companies addressing real human needs—such as curing disease or extending healthy lifespan—will stand the test of time.

    Biotech and pharma are still deeply engaged in essential research areas: cancer treatment, genetic therapy, drug development, regenerative medicine, and longevity science. Once the AI hype cools—and hopefully not triggered again by a crisis like the COVID-19 pandemic—public attention and investment may return to the enduring and vital mission of health and life sciences. When that happens, the sector’s stocks are likely to bounce back.

    Adoption of AI by biotech and pharma companies

    Figure: AI in the focus of companies (the company names shown are not real).

    While the buzz around AI often focuses on tech giants, biotech and pharma companies have not been left behind. Many of them are already integrating AI into key areas of their operations—quietly, strategically, and with long-term impact in mind. Unlike consumer tech, where AI applications can be rapidly deployed and iterated, the life sciences require a more deliberate approach due to strict regulatory environments, data privacy concerns, and the complexity of biological systems.

    Targeted Use Cases, Real Value

    The most common and impactful applications of AI in the life sciences today include:

    • Drug Discovery: AI is helping scientists sift through vast molecular libraries to identify promising drug candidates faster and more accurately than traditional methods.
    • Clinical Trials: AI can optimize trial design, improve patient recruitment, and identify risks early by analyzing real-world data and patient records.
    • Precision Medicine: AI supports the development of personalized therapies by analyzing genetic data and predicting patient-specific responses to treatment.
    • Manufacturing Optimization: Predictive maintenance, process automation, and quality control can be enhanced through AI-driven analytics, leading to cost savings and improved reliability.
    • Regulatory Compliance & Documentation: Natural language processing tools are being used to automate documentation processes and ensure regulatory adherence with less manual effort.

    These are not speculative future applications—they’re happening now. Companies like Novartis, Roche, Pfizer, and BioNTech have already formed partnerships with AI startups or built in-house teams dedicated to machine learning and data science. The transformation is underway, but it’s not flashy. It’s methodical, rooted in science, and focused on outcomes that matter.

    Conclusion

    While AI continues to dominate headlines and drive investor excitement, it’s essential to remember that it is a means to an end—not the end itself. In biotech and pharma, the ultimate goal remains unchanged: improving human health and saving lives. AI will play a transformative role in this mission, but it will never replace the deep scientific knowledge, regulatory understanding, and ethical responsibility that define the industry.

    Juan Carlos

    Similar articles:

  • The pharmaceutical and biotechnology industries are gradually embracing cloud technology

    The pharmaceutical and biotechnology industries are gradually embracing cloud technology

    Without cloud technology, today’s world would be almost impossible to imagine. From the countless applications on our smartphones to the vast majority of online services, cloud computing is the foundation of modern digital life.

    However, despite its widespread adoption across many industries, there are still ongoing discussions and hesitations regarding the use of cloud technology in pharmaceutical and biotech companies. These concerns arise mainly due to strict regulatory requirements, data sensitivity, and the critical nature of operations in these sectors.

    Naturally, cloud providers are making significant efforts to address these concerns, not only to improve security and compliance but also to expand their market share in highly regulated industries.

    Below is a table summarizing some key concerns of pharma and biotech companies, along with the measures cloud providers are implementing to address them:

    | Industry Concern | Cloud Provider Solution |
    | --- | --- |
    | Regulatory Compliance (GxP, FDA, EU GMP) | – Pre-built compliance frameworks (e.g., AWS GxP Compliance Package, Azure for Life Sciences, Google Cloud GxP Guidelines) – Templates, SOPs, and documentation aligned to 21 CFR Part 11, Annex 11, HIPAA, GDPR – Support for Computer System Validation (CSV) to ensure qualified environments for regulated workloads. |
    | Data Security & Privacy | – End-to-end encryption by default (data at rest and in transit) – Bring Your Own Key (BYOK) and Hardware Security Modules (HSMs) for customer-controlled encryption keys – Dedicated regions (e.g., AWS GovCloud, Azure Confidential Cloud) for sensitive data – Full compliance with GDPR, HIPAA, PIPL, and other data protection laws. |
    | Data Sovereignty & Local Regulations | – Regional availability zones for customer choice over data location – Support for data residency requirements in specific countries (e.g., EU, China) – Partnerships with in-country providers to meet strict sovereignty demands. |
    | System Validation Costs | – Automated tools to simplify validation: e.g., AWS Well-Architected for GxP, Microsoft GxP Compliance Blueprints – Shared responsibility: provider handles infrastructure validation, customer validates app layer – Pre-certified building blocks reduce qualification time and cost. |
    | Vendor Lock-In Risks | – Multi-cloud and hybrid solutions supported (e.g., Google Anthos, Azure Arc) – Open standards, containerized applications, and APIs enable portability – Many life sciences firms now design architectures for multi-cloud resilience. |
    | Data Breaches or Insider Threats | – Robust identity and access management (IAM) with multi-factor authentication (MFA) – Role-Based Access Control (RBAC) for strict user permissions – Continuous monitoring, threat detection, and automated security response tools (e.g., AWS GuardDuty, Azure Sentinel, Google Security Command Center). |
    | Operational Disruption During Migration | – Dedicated cloud migration teams for life sciences (e.g., AWS Migration Services, Microsoft FastTrack) – Hybrid options to phase adoption (on-premises + cloud) – Proof-of-concept programs to test cloud systems before full production rollout. |
    | Loss of Direct Infrastructure Control | – Shared Responsibility Model clarifies boundaries: provider secures hardware/infrastructure, customer controls data and access – Full visibility through real-time dashboards, audit logs, and compliance reports – Managed services to reduce operational burden while maintaining oversight. |
    | Cultural/Skill Gaps in Cloud Adoption | – Extensive training programs (e.g., AWS Life Sciences Learning Paths, Azure for Healthcare Training, Google Cloud Skills Boost) – Partner networks to support technical upskilling and change management – Customer success teams to guide regulated customers through adoption steps. |
    | Hybrid Complexity (Legacy & Cloud Mix) | – Seamless hybrid offerings: e.g., AWS Outposts, Azure Stack, Google Distributed Cloud – Integration tools for legacy labs, manufacturing systems, and cloud platforms – Encouragement of phased, modular migrations to reduce complexity. |
    Source: ChatGPT (final prompt: “make a table how the cloud enterprises are solving the concerns”)

    Pharma and Biotech companies are daring to embrace change

    Many pharmaceutical and biotech companies have already launched major cloud initiatives. Industry leaders such as AstraZeneca, Sanofi, Takeda, Pfizer, Roche, Merck (MSD), Novartis, GSK, Bayer, and Johnson & Johnson have started integrating cloud technology into their operations. This shift is logical, given the many advantages cloud solutions offer compared to traditional on-premise systems.

    What is an On-Premise Solution?
    An on-premise solution means that a company owns and operates its own servers, typically located in a specific building or data center managed by the organization. While this approach grants full control and ownership over the infrastructure, it also comes with full responsibility — including procurement, installation, maintenance, backups, updates, and security.

    On-premise solutions are expensive, complex to maintain, and often lack the scalability and efficiency of cloud environments. In contrast, cloud providers operate vast networks of servers distributed globally, offering flexible, scalable, and often more cost-effective infrastructure.

    Today, there are fewer reasons for pharma and biotech companies to resist transitioning to the cloud. However, one critical aspect that remains is the need for highly skilled teams who understand and manage cloud security, particularly under the concept of Shared Responsibility.

    Shared Responsibility Model

    This model clearly defines which security tasks are handled by the cloud provider and which remain the responsibility of the customer. Misunderstandings in this area are one of the most common causes of security breaches in cloud environments. Cloud providers may secure the infrastructure itself to the highest standard, but user-side vulnerabilities — such as weak passwords or poor access management — can still expose sensitive data.

    For pharmaceutical and biotech companies, data is one of their most valuable assets. Years of research generate critical intellectual property, making data protection paramount — whether hosted on-premise or in the cloud. Only companies with vast financial resources may attempt to replicate the scale, redundancy, and security capabilities of major cloud providers through in-house, on-premise solutions. Even then, it raises the question of whether this approach is more secure, efficient, or cost-effective than cloud alternatives.
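To make the customer side of the model concrete, here is a minimal sketch of the kind of access-control check that remains the customer's job. The policy format below is hypothetical (it is not a real cloud provider API), but it illustrates the point: the provider can secure the infrastructure perfectly, and a single overly permissive access rule on the customer side can still expose sensitive data.

```python
# Minimal sketch of a customer-side access review.
# The policy structure here is invented for illustration; real cloud
# providers each have their own policy language and audit tooling.

def find_public_grants(policy: dict) -> list[str]:
    """Return the IDs of statements that grant access to everyone ("*")."""
    risky = []
    for statement in policy.get("statements", []):
        if statement.get("effect") == "allow" and statement.get("principal") == "*":
            risky.append(statement.get("id", "<unnamed>"))
    return risky

policy = {
    "statements": [
        # Scoped grant: fine under the customer's responsibility.
        {"id": "research-data-read", "effect": "allow", "principal": "analytics-team"},
        # Open grant: exactly the customer-side misconfiguration
        # the shared responsibility model warns about.
        {"id": "legacy-share", "effect": "allow", "principal": "*"},
    ]
}

print(find_public_grants(policy))  # ['legacy-share']
```

Running checks like this regularly, whether hand-rolled or via a provider's own audit tools, is one way a team exercises its half of the shared responsibility.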

    In summary, while cloud technology offers undeniable advantages, success depends on:

    ✅ Choosing the right cloud provider
    ✅ Understanding and respecting the shared responsibility model
    ✅ Either building strong internal expertise or trusting specialized external partners to manage the customer-side security effectively

    The companies that strategically embrace cloud solutions while maintaining strong internal security practices will be best positioned to leverage the benefits of digital transformation.

    Sources

    • Pixabay
    • ChatGPT and prompting

    Juan Carlos

  • Powerful and surprising lessons from the first marathon of my life

    Powerful and surprising lessons from the first marathon of my life

    Before the Marathon

    It is Sunday, 5 a.m. I am already fully awake after only five hours of sleep. The nervousness has given me too much energy—not the kind of energy an athlete wants to have just before a marathon. The emotional stress may be unjustified, but it is very real. In just a couple of hours, I will start my first marathon (42 km), and I’m asking myself a lot of questions.

Not only about the past (Did I train enough? Did I rest enough before the marathon?) and the present (How much should I eat now? How many clothes should I take with me on this very rainy day?), but also about the near future (Will I make it? What should I do if I can’t run anymore?). I trained a lot over the last two months—not a long time to prepare for a marathon—but I’ve never stopped doing sports in my life, and I knew that with proper training and discipline, I could be fit enough to finish the marathon after two months of preparation.

    The Marathon

    The marathon started exactly at 7 a.m. I didn’t have any breakfast because I didn’t want to start the marathon while still digesting food. During training, I learned that I should eat about two to three hours before a long run. That would have meant breakfast no later than 5 a.m.

    “It’s fine,” I told myself. “I can run the first 20 km without food—my body is capable of that. But after that, I’ll need some fruit. Worst case, I’ll use the chemical pack that many runners love: the famous energy gels.” The marathon organizers are excellent, so I knew exactly where the food and water stations were located. I had confidence in my training—and in both my mental and physical condition.

    “Let’s run.”

    After a little more than four hours, I finished the marathon. A good time, considering only two months of training. I shared with my closest friends exactly what I did to achieve this, because I knew that two months might not be enough to properly prepare the body to run 42 km. So, I had to test alternative ways to help my body recover faster between training sessions. And it worked!

    Before, during, and after the run, I learned many things—lessons that cannot be learned by just reading books or having conversations. There is no replacement for real experience.

    So here are, for me, the most powerful and surprising lessons from the marathon:

    Lesson One: Seek advice only from people with real experience

    Having an opinion is the easiest thing on this planet. Everybody has one. On the other hand, having an opinion backed by facts and knowledge is better—but still not enough.

    Today, anyone can write a book about running a marathon or give a presentation about training, without ever having run one themselves. But if you want to run a marathon, you don’t need advice from someone who has never done it. On the contrary. Run away from them. You need to focus on your training and yourself. Therefore, eliminate any kind of unnecessary information.

    Find runners with experience, ask for their advice, and listen carefully.

    Lesson Two: Adapt the advice from experienced people to your situation

    Once you start hearing the stories and advice from experienced runners, you’ll need to process that information and adapt it to your own situation. It depends on who you are and how much energy and time you want to invest in training.

Some runners might advise you to train for a full year. Others will support the idea of preparing in just two months. You shouldn’t let any kind of information discourage you. Likewise, you shouldn’t get overly enthusiastic about anything that just reinforces your assumptions. I knew a couple of people who had already run marathons. They were very skeptical about my training period. But I also met another runner who believed in it and, most importantly, in me.

    Plan carefully what you can realistically do with the advice you receive, and adapt it to your personal goals—once you’ve made the decision to run the marathon.

    Lesson Three: Find real support

    Unlike the first lesson, where you need to seek advice from experienced runners, here you don’t necessarily need support from people with experience. What you need is support to help you overcome doubts, stay consistent with your training, and simply have someone to talk to about your progress, questions, timelines, nutrition, and more.

    You need people who know that you’re training for a marathon, respect your schedule, and support your commitment. They’ll encourage you in ways that align with your goals. For example, instead of inviting you to a party on Friday night, they’ll understand your situation and suggest an early lunch instead.

    Of course, it would be ideal if these people were runners themselves—but it’s not required. What matters most is having people around you who respect your journey.

    Sharing the experience with others is one of the best ways to increase joy and reduce the pain along the way.

    Lesson Four: Take care of both your mental and physical state

    A marathon is not only about the body. The mental state plays a key role.

    It’s not a coincidence that almost everyone running a marathon looks extremely positive. Some runners smile throughout the whole race—despite the pain—because they genuinely enjoy it. They enjoy the challenge, the movement, the effort.

    Enjoyment is one of the best protections against failure—because when you enjoy something, you’re more likely to keep trying, and less likely to give up.

    The mental state is crucial because, during a marathon, the mind has to overcome the body. This can also be dangerous if the body is pushed too far—leading to injuries. That’s why training is essential—it helps you learn your limits.

    But mental strength isn’t only important during the run. It’s just as important during the training: to find the discipline to train despite bad weather, fatigue, or doubt.

    You’re not just training your body for a marathon—you’re training your mind.

    Juan Carlos Penafiel Suarez

  • AI Is Now Writing Code at Microsoft and Google — What Comes Next?

    AI Is Now Writing Code at Microsoft and Google — What Comes Next?

    One of the latest developments is that AI is already writing 30% of the code at Microsoft and Google (see news). This should, in theory, correlate with fewer hires of software developers at these companies. However, that is not the case.

    Why? Because AI is also creating new jobs and increasing the demand for expertise. Until companies are forced to reduce hiring or start laying off workers, their growth will continue. Even though AI has already taken over some important tasks, we are still in a growth phase where every person counts, and there’s an increasing need for experts with AI knowledge.

    It seems we are in the midst of an intense race: companies are seeking more skilled workers, either through hiring or training. Eventually, some companies will fall behind because they are not competitive enough. Later, companies will also begin to determine how many workers are no longer needed.

    Both scenarios will unfold in the future. But that doesn’t mean we are heading toward an apocalyptic scenario where AI takes over all jobs and people are left with nothing to do. The more likely outcome is that this transitional race will take several years, and in the short term, it will create more jobs.

    Of course, there are already examples in certain areas where AI is actively eliminating jobs. This brings up two different topics: specialized jobs (which I referred to earlier) and simpler, routine jobs. In the case of these simpler roles, the discussion becomes much more complex.

Here are some recent reports on how AI is affecting these simpler jobs:

    Juan Carlos

  • A running shoe doesn’t make the runner

    A running shoe doesn’t make the runner

    There is a tribe in Mexico called the Rarámuri. The Rarámuri people are renowned for their incredible long-distance running abilities. They don’t train in the conventional sense, nor do they follow a special diet, nor do they use specialized running shoes. Typically, they run in traditional sandals called huaraches. Despite this, they perform at the level of the best professional runners in the world. The reason for this likely lies in their lifestyle, which essentially functions as daily training. Additionally, their diet is probably one of the healthiest in the world, free from the need for supplements or the complex nutritional strategies that many modern athletes depend on.

    The Rarámuri woman who said no to Nike shoes

    Lorena Ramírez is a Rarámuri woman who won the Ultramaratón of the Canyons in 2017, covering a distance of 50 kilometers in her native Copper Canyons. Like many Rarámuri women, she wore her traditional skirt and sandals during the race. At one point, Nike sent her running shoes, and Lorena was asked if she would use them. Her response was no. The reason was astonishing:

    Running shoes don’t inherently make someone a better runner

I cannot stop thinking about this sentence and how many things work in a similar way. It challenges the assumption that if someone is wearing expensive, high-quality running shoes, they must be a runner. Lorena Ramírez’s response holds a lot of depth to explore. The Rarámuri are natural runners. They have been running their entire lives, and their bodies are adapted to run long distances without tiring easily. Yet many products on the market overlook this fact, promoting shoes as if they were essential for running. In some cases, the risk of injury is high, and while shoes might help someone run faster, they are not a substitute for proper running technique or conditioning. In today’s fast-paced world, people often prefer to rely on products rather than take the time to truly understand the art and science of running.

    Many professional athletes have won competitions without relying on special shoes: Abebe Bikila from Ethiopia won the 1960 Rome Olympics Marathon barefoot, Zola Budd from South Africa set records while running barefoot in the 1980s, and the Tarahumara (Rarámuri) runners from Mexico have consistently excelled in ultramarathons wearing their traditional sandals. This fact has raised important questions about what is truly beneficial for running footwear and how humans are meant to run. Ultimately, only someone with extensive running experience can determine whether a running shoe is helpful or, on the contrary, an impediment.

    The parallel in the business world

    There is a parallel here to the business world. Just as a running shoe can be a tool to improve performance, there are countless tools in the market designed to optimize processes. However, the market is saturated with tools that promise solutions to every imaginable problem. This approach is risky, as only a group of individuals with deep experience in the processes can truly assess whether a tool will be effective (e.g. the use of AI tools). Attempting to solve a poorly understood problem with an untested or unsuitable tool is a recipe for disaster.

    The Rarámuri are natural runners, and their years of experience help them understand their bodies and how to perform at their peak. Many factors come into play: pacing, breathing rhythm, distance, terrain type, weather conditions, nutrition, hydration, mental focus, and recovery. Only after a runner has spent significant time mastering these elements can they truly determine whether a tool—like specialized shoes—would help, and in what specific situations it might make a difference.

    The video of Lorena Ramírez Hernandez talking about the running shoes:

    Juan Carlos