PhD as a mother

As a mother currently pursuing my doctorate, I often encounter the belief that higher education is not the ideal time for parenthood. In this post, I want to share my personal experience, offering a different perspective.

A year ago, I began my doctorate with a two-and-a-half-month-old baby. When I received the acceptance email from Oxford, I was thrilled – a dream come true. However, this raised a question: could I pursue this dream while pregnant? I believed in balancing motherhood and academic aspirations, and my advisor’s encouragement reinforced this belief. We, as a family, moved from Israel to England, adjusting to this new chapter.

It hasn’t been easy. Physically, post-pregnancy recovery and sleepless nights were tough. Emotionally, I constantly struggle with guilt over balancing academic and maternal responsibilities. If I focus on my daughter, I worry about neglecting my research; if I concentrate on my studies, I feel like a bad mother. The logistics of managing a household, especially when being the primary caregiver, added another layer of complexity. Motherhood often feels isolating, as not everyone around me can relate to my situation.

Yet, doctoral studies offered unexpected advantages. The flexibility allows me to align my work with my daughter’s schedule, often during nights or weekends. This means I can compensate for lost time without impacting others, unlike in a regular job. Interestingly, this flexibility leads to more time spent with my daughter than if I had a typical job. Moreover, the challenges of motherhood put academic obstacles into perspective. The best part of my day is always the hug from my daughter after a day of work.

As I keep moving forward with my PhD, here are some key tips that have helped me so far:

  1. Flexible Scheduling: Organize daily tasks, including household chores, within specific hours to enhance efficiency.
  2. Creating a Supportive Environment: Having a support system, be it your partner or friends, is crucial. Address practical issues early on, like daycare and babysitters, and don’t be shy to ask for help.
  3. Aligning Expectations with Your Supervisor: Communicate your limitations early to avoid misunderstandings.
  4. Practice Compassion: Acknowledge that you can’t do everything and be kind to yourself.

In the race of life, there never seems to be a “right” time for children. Whether it’s career progression or personal aspirations, the timing is always challenging. However, if you feel ready, that is the right time for you.

OPIGmas, 2023

Our annual, end-of-Michaelmas OPIG celebrations took place at the start of December in the MCR (Middle Common Room) at Lady Margaret Hall.

OPIGmas is a much-anticipated combination of pot luck, Secret Santa, and party games.

Perhaps Jay’s megaphone topped the list of gag gifts…


On National AI strategies


Recently, I have become quite interested in how countries have been shaping their national AI strategies or frameworks. Since the launch of ChatGPT, several concerns have been raised about AI safety and how such groundbreaking AI technologies could augment or adversely affect our daily lives. To address the public’s concerns and set standards and practices for AI development, some countries have recently released their national AI frameworks. As a budding academic researcher in this space, keen to make AI more useful for medicine and healthcare, I am particularly interested in two aspects of the few frameworks I have looked at (specifically those of the US, UK and Singapore): the multi-stakeholder approach and the focus on AI education, both of which I will delve into in this post.


How to get more information from Slurm?

So the servers you use have Slurm as their job scheduler? Blopig has very good resources to help you navigate a Slurm environment.

If you are new to SLURMing, I highly recommend Alissa Hummer’s post. There, she explains in detail what you need to submit, check or cancel a job, and even how to run a job with more than one script in parallel by dividing it into tasks. Her post is so thorough that it also covers how to move files across servers, create and manage SSH keys, and set up Miniconda and GitHub on a Slurm server.

And Blopig has even more to offer: Maranga Mokaya’s and Oliver Turnbull’s posts are nice complements for more advanced use of Slurm. They cover array jobs, more efficient file copying and creating aliases (shortcuts) for frequently used commands.

So… What could I possibly have to add to that?

Well, suppose you are concerned that you or one of your mates might flood the server (not that it has ever happened to me, but just in case).

(GIF: Helga G. Pataki, from the official Hey Arnold page on giphy.com)

How would you go about figuring out how many cores are active? How much memory is left? Which GPU does that server use? Fear not, as I have some basic tricks that might help you.

Get information about the servers and nodes:

A pretty straightforward way of getting basic information on Slurm servers is the command:

sinfo -M ALL

This will give you information on partition names, whether each partition is available or not, how many nodes it has, its usage state and a list of those nodes.

CLUSTER: name_of_cluster
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
low         up  7-00:00.0    1  idle  node_name.server.address 

The -M ALL argument is used to show every cluster. If you know the name of the cluster, you can use:

sinfo -M name_of_cluster

But what if you want to know not only whether it is up and being used, but how much of its resources are free? Fear not, there is much to learn.

You can use the same sinfo command followed by some arguments that will give you what you want. And the magic command is:

sinfo -o "%all" -M all

This will show you a lot of information about every partition of every cluster:

CLUSTER: name_of_cluster
AVAIL|ACTIVE_FEATURES|CPUS|TMP_DISK|FREE_MEM|AVAIL_FEATURES|GROUPS|OVERSUBSCRIBE|TIMELIMIT|MEMORY|HOSTNAMES|NODE_ADDR|PRIO_TIER|ROOT|JOB_SIZE|STATE|USER|VERSION|WEIGHT|S:C:T|NODES(A/I) |MAX_CPUS_PER_NODE |CPUS(A/I/O/T) |NODES |REASON |NODES(A/I/O/T) |GRES |TIMESTAMP |PRIO_JOB_FACTOR |DEFAULTTIME |PREEMPT_MODE |NODELIST |CPU_LOAD |PARTITION |PARTITION |ALLOCNODES |STATE |USER |CLUSTER |SOCKETS |CORES |THREADS 
(GIF: a light pink pudgy penguin angrily saying “That’s too much information”, from giphy.com)

Which is a lot.

So, how can you make it more digestible and filter only the info that you want?

Always start with:

sinfo -M ALL -o "%n" 

And inside the quotation marks you add the fields you would like to see. The %n argument shows the hostname of every node in each cluster. If you want to know how much free memory there is on each node, you can use:

sinfo -M ALL -o "%n %e"

If you would like to know how the CPUs are being used (how many are allocated, idle, other and total), you should use:

sinfo -M ALL -o "%n %e %C"
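If you want to work with this output programmatically, a small parser helps. Here is a minimal Python sketch that parses the kind of output `sinfo -M ALL -o "%n %e %C"` produces; the sample text and node names below are made up for illustration, and on a real cluster you would capture the output with `subprocess.run(["sinfo", ...])` instead.

```python
# Minimal sketch: turn the output of `sinfo -M ALL -o "%n %e %C"` into
# a list of per-node dictionaries. The sample output below is made up.

def parse_sinfo(output):
    """Parse lines of 'hostname free_mem allocated/idle/other/total'."""
    nodes = []
    for line in output.strip().splitlines():
        # Skip the per-cluster banner and the column header
        if line.startswith("CLUSTER") or line.startswith("HOSTNAMES"):
            continue
        hostname, free_mem, cpus = line.split()
        alloc, idle, other, total = (int(x) for x in cpus.split("/"))
        nodes.append({
            "hostname": hostname,
            "free_mem_mb": int(free_mem),
            "cpus_allocated": alloc,
            "cpus_idle": idle,
            "cpus_total": total,
        })
    return nodes

sample = """\
CLUSTER: name_of_cluster
HOSTNAMES FREE_MEM CPUS(A/I/O/T)
node01 51200 16/48/0/64
node02 1024 60/4/0/64
"""

for node in parse_sinfo(sample):
    print(node["hostname"], "has", node["cpus_idle"], "idle CPUs")
```

From here it is easy to, say, sort nodes by idle CPUs before deciding where to submit.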

Well, I could give more and more examples, but it is more efficient to just leave the table of possible arguments here. They come from the Slurm documentation.

Argument - What does it do?
%all - Print all fields available for this data type, with a vertical bar separating each field.
%a - State/availability of a partition.
%A - Number of nodes by state in the format “allocated/idle”. Do not use this with a node state option (“%t” or “%T”) or the different node states will be placed on separate lines.
%b - Features currently active on the nodes; also see %f.
%B - The maximum number of CPUs per node available to jobs in the partition.
%c - Number of CPUs per node.
%C - Number of CPUs by state in the format “allocated/idle/other/total”. Do not use this with a node state option (“%t” or “%T”) or the different node states will be placed on separate lines.
%d - Size of temporary disk space per node in megabytes.
%D - Number of nodes.
%e - The total memory, in MB, currently free on the node as reported by the OS. This value is for informational use only and is not used for scheduling.
%E - The reason a node is unavailable (down, drained, or draining states).
%f - Features available on the nodes; also see %b.
%F - Number of nodes by state in the format “allocated/idle/other/total”. Note that using this format option with a node state format option (“%t” or “%T”) will result in the different node states being reported on separate lines.
%g - Groups which may use the nodes.
%G - Generic resources (GRES) associated with the nodes (e.g. the GPUs a node uses).
%h - Print the OverSubscribe setting for the partition.
%H - Print the timestamp of the reason a node is unavailable.
%i - If a node is in an advanced reservation, print the name of that reservation.
%I - Partition job priority weighting factor.
%l - Maximum time for any job in the format “days-hours:minutes:seconds”.
%L - Default time for any job in the format “days-hours:minutes:seconds”.
%m - Size of memory per node in megabytes.
%M - PreemptionMode.
%n - List of node hostnames.
%N - List of node names.
%o - List of node communication addresses.
%O - CPU load of a node as reported by the OS.
%p - Partition scheduling tier priority.
%P - Partition name followed by “*” for the default partition; also see %R.
%r - Only user root may initiate jobs, “yes” or “no”.
%R - Partition name; also see %P.
%s - Maximum job size in nodes.
%S - Allowed allocating nodes.
%t - State of nodes, compact form.
%T - State of nodes, extended form.
%u - Print the user name of who set the reason a node is unavailable.
%U - Print the user name and uid of who set the reason a node is unavailable.
%v - Print the version of the running slurmd daemon.
%V - Print the cluster name if running in a federation.
%w - Scheduling weight of the nodes.
%X - Number of sockets per node.
%Y - Number of cores per socket.
%z - Extended processor information: number of sockets, cores, threads (S:C:T) per node.
%Z - Number of threads per core.

And there you have it! Now you know what is going on in your Slurm clusters and can avoid job-blocking your peers.

If you want to know more about Slurm, keep an eye on Blopig!

Finding and testing a reaction SMARTS pattern for any reaction

Have you ever needed to find a reaction SMARTS pattern for a certain reaction but don’t have it already written out? Do you have a reaction SMARTS pattern but need to test it on a set of reactants and products to make sure it transforms them correctly and doesn’t allow odd reactants to work? I recently did, and I spent some time developing functions that can:

  1. Generate a reaction SMARTS for a reaction given two reactants, a product, and a reaction name.
  2. Check the reaction SMARTS on a list of reactants and products that have the same reaction name.
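To give a flavour of what point 2 looks like in practice, here is a minimal sketch using RDKit’s `ReactionFromSmarts` and `RunReactants`; the amide-coupling SMARTS and the example molecules are my own illustrative choices, not the patterns or functions developed in the post.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Illustrative pattern: carboxylic acid + primary amine -> amide
rxn = AllChem.ReactionFromSmarts(
    "[C:1](=[O:2])[OX2H1].[NX3;H2:3]>>[C:1](=[O:2])[N:3]"
)

acid = Chem.MolFromSmiles("CC(=O)O")      # acetic acid
amine = Chem.MolFromSmiles("NCc1ccccc1")  # benzylamine

# RunReactants returns one tuple of products per way the pattern matched
product_smiles = set()
for prods in rxn.RunReactants((acid, amine)):
    mol = prods[0]
    Chem.SanitizeMol(mol)  # products come back unsanitised
    product_smiles.add(Chem.MolToSmiles(mol))

print(product_smiles)
```

Checking a pattern then amounts to running it over known reactant/product pairs and asserting the expected product SMILES appears (and that mismatched reactants yield nothing).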

Some useful pandas functions

Pandas is one of the most used packages for data analysis in Python. The library provides functionality that allows you to perform complex data manipulation operations in a few lines of code. However, as the number of functions provided is huge, it is impossible to keep track of all of them. More often than we’d like to admit, we end up writing lines and lines of code only to discover later that the same operation can be performed with a single pandas function.

To help avoid this problem in the future, I will run through some of my favourite pandas functions and demonstrate their use on an example data set containing information on crystal structures in the PDB.
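As a taste of the kind of functions involved, here is a small sketch; the crystal-structure table below is made up for illustration and is not the PDB data set used in the post.

```python
import pandas as pd

# Toy table of crystal structures (illustrative only)
df = pd.DataFrame({
    "pdb_id": ["1ABC", "2DEF", "3GHI", "4JKL", "5MNO"],
    "method": ["X-ray", "X-ray", "NMR", "X-ray", "cryo-EM"],
    "resolution": [1.8, 2.5, None, 1.2, 3.4],  # NMR has no resolution
})

# value_counts: structures per experimental method
counts = df["method"].value_counts()

# query: filter with a readable expression instead of boolean masks
# (rows with NaN resolution, like the NMR entry, drop out automatically)
high_res = df.query("resolution < 2.0")

# nsmallest: best-resolution structure without sorting the whole frame
best = df.nsmallest(1, "resolution")

print(counts.to_dict())          # {'X-ray': 3, 'NMR': 1, 'cryo-EM': 1}
print(list(high_res["pdb_id"]))  # ['1ABC', '4JKL']
print(best["pdb_id"].iloc[0])    # 4JKL
```

Each of these one-liners would otherwise take a loop or a chain of boolean indexing to reproduce.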


The Antibody Dictionary

Just as you can get lost in a new language when moving country, you might encounter a language barrier when moving research fields. This dictionary will guide you through the complex world of immunoinformatics, with a focus on antibodies. Whether your main research will be in this field, you want to apply your machine learning model to antibodies, or you just want to understand the research performed in OPIG, this dictionary will get you started.

The Antibody Dictionary:

Affinity maturation: The optimisation process by which naive antibodies develop into memory antibodies with higher affinity for a specific antigen.

Antibody: (immunoglobulin) a Y-shaped molecule important in the adaptive immune system. A canonical antibody consists of two identical heavy chains and two identical smaller light chains. 
