Spark in Microsoft Fabric... some doubts
I'm taking the 30-day Spark course, but I still have a few doubts about DataFrame filtering.

To read the contents of a table into a DataFrame, we can write:

df = spark.sql("SELECT * FROM SparkSetember.propertysales LIMIT 1000")

Then we can, for example, filter the table with the function:

df.filter(df.City.startswith("L")).show()

My question is this: why don't we just do it straight away with a WHERE clause?

df = spark.sql("SELECT * FROM SparkSetember.propertysales WHERE City LIKE 'L%'")