Why Pandas feels clunky when coming from R (2024)

(sumsar.net)

83 points | by Tomte 21 hours ago

10 comments

  • dleather 19 hours ago
    I couldn't agree more. I'm fluent in languages like Julia and MATLAB. I'm 90% fluent in R and prefer data.table over dplyr, but working in both is easy enough. The past few months I've been fully transitioning to Python. And while I find base Python to be extremely elegant, typical data science and scientific computing workflows are a headache. There aren't just 1-2 packages to choose from for each use, every package has its own syntax, and keeping track of Pandas Series vs DataFrames is confusing. Want fast differentiable code? Then rewrite everything in JAX's flavour of numpy, which requires its own tricks.

    What Python desperately needs is a coordinated effort for a core data science / scientific computing stack with a unified framework.

    In my opinion, if it weren't for Python's extensive use in industry and its package ecosystem, Julia would be the language of choice for nearly all data science and scientific computing uses.

    • rich_sasha 2 hours ago
      > What Python desperately needs is a coordinated effort for a core data science / scientific computing stack with a unified framework.

      In fairness, if you're not touching Pandas it's pretty good, I'd say. Everything is based around numpy and scipy. The sklearn API is a bit idiosyncratic but works really nicely in practice and is extensible. JAX has an API that is 1:1 equivalent to numpy, probably with some catches, but still. All the trouble starts with pandas.
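
      Something like this, as a minimal illustration (a rough sketch; one of the catches being that JAX arrays are immutable):

          import numpy as np
          import jax.numpy as jnp

          x = np.linspace(0.0, 1.0, 5)
          y = jnp.linspace(0.0, 1.0, 5)

          np.sum(np.sin(x) ** 2)     # plain numpy
          jnp.sum(jnp.sin(y) ** 2)   # same spelling under jax.numpy

          # one of the catches: no in-place mutation on JAX arrays
          x[0] = 42.0                # fine in numpy
          y = y.at[0].set(42.0)      # JAX equivalent via the .at[] API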

      Pandas is pretty terrible IMO for all the reasons listed by OP and TFA - and more.

    • hatmatrix 17 hours ago
      > And while I find base Python to be extremely elegant, typical data science and scientific computing workflows are a headache.

      That's my impression as well. Going back to the topic of the original post, pandas only partially implements the idioms of the tidyverse, so you have to mix in a lot of different forms of syntax (with lambdas to boot) to get things done. Julia is much nicer, but I find myself using PythonCall more often than I'd like.
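
      Something like this, roughly - the chain reads almost like a dplyr pipeline, but lambdas creep in wherever a step needs to refer to the intermediate frame (a sketch only; purchases.csv stands in for the article's data):

          import pandas as pd

          purchases = pd.read_csv("purchases.csv")

          (
              purchases
              .assign(total=lambda d: d["amount"] - d["discount"])  # mutate-style step needs a lambda
              .loc[lambda d: d["total"] > 0]                        # so does filtering on the new column
              .groupby("country", as_index=False)["total"]
              .sum()
          )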

      Scipy was originally supposed to provide the scientific computing stack, but then many offshoots (pandas, Ibis, JAX, etc.) happened. I guess that's what you get with a community-based language. MATLAB has its warts, but MathWorks does manage to present a coherent stack on that end.

  • emehex 18 hours ago
    I haven't seriously used R in nearly a decade but I still miss (and think about) dplyr and the hadleyverse...

    A few years ago I made a package called "redframes" that tried to "solve" all of my frustrations with pandas and make data wrangling feel more like R, while retaining all the best bits of Python...

    Alas, it never really took off. For those curious: https://github.com/maxhumber/redframes

    • doctaj 14 hours ago
      Agreed! I started out doing data analysis in R… switched to Python because it was more multi-purpose at the time (i.e., data engineering, analysis, and model deployment)… and I think about it and miss it so often.
    • hatmatrix 17 hours ago
      Hey this looks pretty tidy.

      There is so much hype and luck involved in widespread adoption; you never know with these things.

  • BDPW 20 hours ago
    I've had a similar experience from the opposite side. I've had quite a few years of experience in Python and had to work in R for an internship during my master's.

    My impression was that it's pretty easy to do straightforward things like the examples described in the article. But when you have to do complicated or unusual things with your data I found it very frustrating to work with. Access to the underlying data was often opaque and it was difficult for me at times to figure out what was happening under the hood.

    Does anyone here know any research areas still using R?

    • vharuck 19 hours ago
      As an R user, I get what you mean. If you need to do things that don't fit well in the "tidyverse" model, you have three options:

      1. Wrap the complicated bits in functions, then force it into the tidyverse model by abusing summarize and mutate.

      2. Use data.table. It's very adaptable and handles arbitrary multiline expressions (returning a data.table if the last expression returns a list, otherwise returning the object as-is).

      3. Use base R. It's not as bad as people make it out to be. You'll need to learn it anyway if you want to do anything beyond the basics.

    • goosedragons 1 hour ago
      R is very heavily used in statistics. It's also common in other sciences. I've worked a fair bit with biologists, and that's what they're using for data analysis and visualization too.
    • specproc 5 hours ago
      Totally agree, I think this whole conversation is a first language thing. I learned pandas first, and found the whole R ecosystem to be a complete mess.

      So many different types of object, so many different syntaxes. The tidyverse makes sense, and sure, it's elegant, but that doesn't help if your colleagues are using base R. Don't even get me started on docs and Stack Overflow for R. I much prefer, and always will prefer, Python.

      The one area where I still go back to R is proper survey work. I've looked for years and haven't found anything equivalent to the survey package for Python. I do like that R tends to start from the assumption that data is weighted.

      Fortunately I don't do surveys much anymore.

      • larrled 2 hours ago
        Survey people use R already, or maybe Stata, so there isn't any need for a Python package. It's sort of trivial to implement a jackknife loop in Python, if you really had to. A Python survey package would not be pythonic.
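
        For instance, a bare-bones delete-one jackknife for a weighted mean is only a few lines (a sketch; a real survey package layers stratification, clustering, and replicate weights on top of this):

            import numpy as np

            def jackknife_se(values, weights):
                """Delete-one jackknife standard error of a weighted mean."""
                n = len(values)
                reps = np.array([
                    np.average(np.delete(values, i), weights=np.delete(weights, i))
                    for i in range(n)
                ])
                return np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))

            values = np.array([3.0, 5.0, 2.0, 8.0])
            weights = np.array([1.0, 2.0, 1.5, 0.5])
            print(np.average(values, weights=weights), jackknife_se(values, weights))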
    • pteetor 19 hours ago
      R is used extensively in quant finance. The quant traders, portfolio managers, and risk managers with whom I work all use R.
    • Tomte 20 hours ago
      Everyone in statistics, and lots of people applying statistics in other disciplines (anthropology etc.).
    • j_bum 20 hours ago
      In addition to stats, R is widely used in computational biology and bioinformatics domains. It’s also widely used in the biopharma industry for a variety of other purposes.
      • mauritsd 20 hours ago
        IME (bioinformatics PhD in the Netherlands a number of years ago) it's mostly still preferred in a (pre-)clinical context, not so much in academia itself
    • kgwgk 20 hours ago
      > My impression was that it's pretty easy to do straightforward things like the examples described in the article. But when you have to do complicated or unusual things with your data I found it very frustrating to work with.

      That's when I realised that the article (which, obviously, I had not looked at) takes the "modern" approach.

    • tyfon 19 hours ago
      Not really research per se, but it's used extensively in banking here in Norway, for anything from statistical model development to basic analysis and reporting.
  • great_wubwub 20 hours ago
    I have no R experience but have been using Polars instead of Pandas for this sort of stuff and it feels less clunky. How does Polars compare to R?
    • j_bum 20 hours ago
      I strongly prefer `dplyr` and the R stack for table processing and visualization.

      But recently I've been working with much larger-scale data than R can handle (thanks to R's base int32 limitation) and have needed to use Python instead.

      Polars feels much more intuitive and similar to `dplyr` to me for table processing than Pandas does.

      I often ask my LLM of choice to “translate this dplyr call to Polars” as I’ve been learning the Polars syntax.
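
      For a flavour of it, the article's running example comes out roughly like this in Polars (an untested sketch, with purchases.csv standing in for the article's data):

          import polars as pl

          purchases = pl.read_csv("purchases.csv")

          (
              purchases
              .filter(pl.col("amount") < 10 * pl.col("amount").median().over("country"))
              .with_columns(total=pl.col("amount") - pl.col("discount"))
              .group_by("country")
              .agg(pl.col("total").sum())
          )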

      • aydyn 19 hours ago
        It blows my mind that in 2025 R is still limited to 2^31-1 rows. R needs a Python 3.0 moment, but that is not going to happen, for certain unfortunate but unnecessary reasons.
        • j_bum 19 hours ago
          Yep. I have a deep love/hate relationship with R.

          This is one of those decisions that I just do not understand. In your mind, why do you imagine a set of improvements won’t be made?

          Otherwise, for now, working with Python and R using the reticulate package in Quarto is perfect for my needs.

          If the Positron IDE could get in-line plot visualization in Quarto documents like the RStudio IDE has, I’d be the happiest camper.

          • aydyn 16 hours ago
            > In your mind, why do you imagine a set of improvements won’t be made?

            The problem is not technical. Let's just leave it at that.

            • j_bum 15 hours ago
              Ugh now I am extremely curious. This is a lead at least lol.
  • btown 14 hours ago
    If you're looking for the analytics-world version of this "I wish I had real pipelines rather than dot notation" sentiment, I highly recommend checking out Malloy: https://www.malloydata.dev/

    It's a domain-specific language that makes pipelining a first-class citizen and compiles into various flavors of SQL... but it's also a fully fleshed-out VS Code environment that dynamically checks typing based on live DB schemas, and lets you represent your entire semantic layer in incredibly terse code with type hints and error bars. It's being actively developed and was started by the founder of Looker.

    While it's still experimental, it's very usable, particularly if you export the compiled SQL into other BI tools, and visualization tools are being developed incredibly rapidly.

  • ryan-duve 11 hours ago
    I wonder if the author would have found Pandas less clunky if they had known about `.eval`?

        import pandas as pd

        purchases = pd.read_csv("purchases.csv")

        (
            purchases
            # keep rows where amount is under 10x the country's median amount
            .loc[lambda x: x["amount"] < 10 * x.groupby("country")["amount"].transform("median")]
            # .eval adds a derived column without leaving the method chain
            .eval("total = amount - discount")
            .groupby("country")["total"]
            .sum()
        )
  • dkdcio 20 hours ago
    pandas* per the style guide (nobody follows it)

    Also, I recommend trying Ibis. It was created by the original creator of pandas and solves so many of these issues.

    https://ibis-project.org

    • jna_sh 20 hours ago
      Any thoughts on ibis vs polars?
      • gnulinux 20 hours ago
        Disclaimer: I've never used Ibis before, but I use polars and DuckDB daily.

        It seems like Ibis uses DuckDB as its backend (by default) and has Polars support as well. Given this, maybe see if Ibis works better for you than polars. If you very specifically need polars, using it directly will for sure be better. DuckDB is faster than polars and has great polars support, so depending on how Ibis is implemented it might be "better" than polars as a data frame lib.
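
        From the docs, the default DuckDB setup looks roughly like this (a sketch, not something I've run; purchases.csv is just a placeholder file):

            import ibis
            from ibis import _

            con = ibis.duckdb.connect()                 # in-memory DuckDB, the default backend
            purchases = con.read_csv("purchases.csv")   # lazily evaluated table expression

            result = (
                purchases
                .mutate(total=_.amount - _.discount)    # derived column via the deferred API
                .group_by("country")
                .aggregate(total=_.total.sum())
                .to_pandas()                            # executes on DuckDB, returns a pandas DataFrame
            )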

        • orlp 13 hours ago
          > DuckDB is faster than polars

          Whether or not DuckDB is faster than Polars depends on the query and data size. I've spent a large portion of the last 2 years building a new execution engine for and optimizing Polars, and it shows: https://pola.rs/posts/benchmarks/.

    • Vaslo 17 hours ago
      There is also Narwhals.

      https://pypi.org/project/narwhals/#description

      I tried really hard to use Ibis, but I ran into cases where it was way easier to do some things in pandas/polars, and I had to keep dropping out of Ibis to make it work, so I gave up on it for the time being.

  • __mharrison__ 17 hours ago
    Lots of readers of my book, Effective Pandas, say it helps Pandas feel more like what they are used to from R...

    (I've never used R myself, but certainly have some very strong opinions about Pandas after having written 3 books about it.)

  • wodenokoto 19 hours ago
    A really, really big part of this is thanks to RStudio, which, as you write and run lines, peeks into memory to see what the columns in your data frame are, and understands the dplyr DSL well enough to autocomplete what are essentially non-existent variables.
  • smabie 18 hours ago
    The original sin of Pandas is row indices
    • hatmatrix 17 hours ago
      Actually I like that you can use it as a dictionary of tuples (i.e., rows).
    • Vaslo 17 hours ago
      One of the big benefits of polars over pandas is not dealing with the constant index nonsense. I can't tell you all of the issues I had as a beginner with pandas, trying to debug silly index errors.
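
      The classic trap is that a filtered frame keeps its original row labels, so later operations silently align on them - a small sketch of the kind of thing I mean:

          import pandas as pd

          df = pd.DataFrame({"x": [1, 2, 3, 4]})
          top = df[df["x"] > 2]      # keeps the original labels 2 and 3

          top.iloc[0]                # positional access: the row with x == 3
          # top.loc[0]               # KeyError: label 0 was filtered out

          df["y"] = top["x"]         # assignment aligns on labels,
          print(df)                  # so rows 0 and 1 end up as NaN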