Profile photo for Joachim Pense

Some complaints from the top of my head:

- SQL is generally awkward to format because of its syntax.
- You have to repeat a select-list expression in the GROUP BY clause, with no way to reference it by alias.
- In an update statement, tables come first, then columns; in queries it is the other way round (because SQL syntax is driven by natural language). I would like them to match so quick checks/tests are easier.
- Expressions that occur several times in a query (e.g. conditions that appear in both branches of a union) must be typed twice; there is no way to abstract them out.
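The GROUP BY repetition is easy to see in any dialect. Here is a quick sketch using Python's built-in sqlite3 (table and data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 10), ("ALICE", 20), ("bob", 5)])

# Standard SQL makes you repeat the full expression in GROUP BY;
# the select-list alias "cust" cannot portably be referenced there.
rows = conn.execute("""
    SELECT lower(customer) AS cust, sum(amount) AS total
    FROM orders
    GROUP BY lower(customer)
    ORDER BY cust
""").fetchall()
print(rows)  # [('alice', 30), ('bob', 5)]
```

(Some dialects, SQLite and MySQL among them, do accept the alias in GROUP BY as an extension, but portable SQL cannot rely on it.)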

Profile photo for Dan Lenski

The fact that you can't rearrange "clauses" in select statements can drive me a little bit crazy.

When I have to write a query like this...

  1. SELECT ft.id, bt.id, func(col3) 
  2. FROM foo_table ft 
  3. LEFT JOIN bar_table bt on (ft.frob=bt.baz) 
  4. JOIN other_table ot on (bt.baz=ot.whatsit) 
  5. WHERE ft.wee in (1,2,3) 
  6. GROUP BY ft.id, bt.id 


Usually the output columns (results) are
not what I think of first. Often the tables and the join conditions are what I think about first. Sometimes the grouping or filtering is what I think about first. I'm always having to jump around and go back and edit things out-of-order when writing a query... and it gets messier when there are subqueries.

I wish I could rearrange a big query like this, for example...

  1. SELECT FROM foo_table ft 
  2. LEFT JOIN bar_table bt on (ft.frob=bt.baz) 
  3. JOIN other_table ot on (bt.baz=ot.whatsit) 
  4. WHERE ft.wee in (1,2,3) 
  5. OUTPUT ft.id, bt.id, func(col3) 
  6. GROUP BY ft.id, bt.id 


The SQLAlchemy ORM for Python does allow building up SQL queries in mostly-arbitrary order, which is one of my favorite features :)
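The idea can be sketched with a toy, stdlib-only query builder (this is not SQLAlchemy itself, just an illustration of clause-order independence; all names are invented):

```python
# A toy query builder: clauses may be attached in any order,
# and the SQL text is only rendered at the end.
class Query:
    def __init__(self):
        self.cols, self.table, self.joins = [], None, []
        self.conds, self.groups = [], []

    def select(self, *cols):
        self.cols += cols
        return self

    def from_(self, table):
        self.table = table
        return self

    def join(self, table, on):
        self.joins.append(f"JOIN {table} ON ({on})")
        return self

    def where(self, cond):
        self.conds.append(cond)
        return self

    def group_by(self, *cols):
        self.groups += cols
        return self

    def sql(self):
        parts = [f"SELECT {', '.join(self.cols)}", f"FROM {self.table}"]
        parts += self.joins
        if self.conds:
            parts.append("WHERE " + " AND ".join(self.conds))
        if self.groups:
            parts.append("GROUP BY " + ", ".join(self.groups))
        return " ".join(parts)

# Clauses written in "thinking order": tables and joins first,
# then the filter, and only then the output columns.
q = (Query()
     .from_("foo_table ft")
     .join("bar_table bt", "ft.frob=bt.baz")
     .where("ft.wee in (1,2,3)")
     .select("ft.id", "bt.id")
     .group_by("ft.id", "bt.id"))
print(q.sql())
```

SQLAlchemy's real `select()` behaves similarly: `.where()`, `.join()`, and `.group_by()` can be chained in any order before the statement is compiled.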


Profile photo for Assistant
Assistant

One of the most commonly cited annoyances about SQL syntax is its verbosity and the need for precision, which can lead to frustration, especially when dealing with complex queries. Here are a few specific issues:

  1. Case Sensitivity: SQL can be case-sensitive depending on the database system, which can lead to errors if identifiers are not consistently cased.
  2. Joins: Writing complex joins can become cumbersome, especially when dealing with multiple tables. The syntax can get messy and hard to read.
  3. Error Messages: SQL error messages can sometimes be vague, making it difficult to pinpoint the exact issue in a query.
  4. Lack of Standardization: Different database systems (like MySQL, PostgreSQL, SQL Server, etc.) have variations in syntax and functions, which can lead to compatibility issues when switching between systems.
  5. Subqueries: Writing subqueries can lead to performance issues and can make the overall query harder to read and maintain.
  6. Data Type Handling: Dealing with different data types and conversions can be tricky, especially when passing parameters between different SQL functions.

These challenges can slow down development and make debugging more time-consuming.

Profile photo for Jerry Cruise

Apparently, one of the original purposes of SQL is to make data query processing easy. The language uses many English-like terms and syntax in an effort to make it easy to learn, particularly for non-IT people. Simple SQL statements read like English, and even people without any programming experience can write them.

However, the language becomes clumsy as query needs become even slightly more complicated. It often needs hundreds of rows of multilevel nested statements to achieve a computing task. Even professional programmers often find it hard to write, let alone the non-IT people. As a result, such computing tasks become the popular main question in programmer recruitment tests of many software companies. In real-world business situations, the size of SQL code for report queries is usually measured by KBs. The several-line SQL statements only exist in programming textbooks and training courses.

SQL problem analysis

What is the reason behind the sheer bulk of the code? Let’s try to find the answer, that is, SQL’s weaknesses, through an example.

Suppose we have a sales performance table, sales_amount, consisting of three fields: sales (the salesperson), product, and amount (date information is omitted to keep the analysis simple).

We are trying to find salespeople whose sales amounts rank in top 10 in terms of both air conditioners and TV sets.

The task is not difficult. It is easy for us to think of the following natural computing process:

1. Sort the sales performance table by sales amount of air conditioners and get the top 10;

2. Sort the sales performance table by sales amount of TV sets and get the top 10;

3. Perform intersection operation on step 1 and step 2’s result sets to get the final result.

To implement the algorithm in SQL:

1. Find salespeople whose sales amounts rank in top 10 for air conditioners. The code is fairly simple:

  1. select top 10 sales from sales_amount where product='AC' order by amount desc 

2. Find salespeople whose sales amounts rank in top 10 for TV sets. Same action is performed:

  1. select top 10 sales from sales_amount where product='TV' order by amount desc 

3. Calculate the intersection between result sets of step 1 and step 2. As SQL does not support step-by-step coding, result sets of the previous two steps cannot be retained and the statements need to be copied. Here the code becomes a little complicated.

  1. select * from 
  2. ( select top 10 sales from sales_amount where product='AC' order by amount desc ) 
  3. intersect 
  4. ( select top 10 sales from sales_amount where product='TV' order by amount desc ) 

This is just a simple three-step computation, yet many computations in real-world analytic scenarios involve a dozen or more steps. Imagine how complex that code becomes.
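The pattern can be reproduced with Python's sqlite3 (SQLite has no `top`, so `LIMIT` stands in; the data is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_amount (sales TEXT, product TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales_amount VALUES (?,?,?)",
                 [(f"p{i}", "AC", i) for i in range(1, 21)] +
                 [(f"p{i}", "TV", i) for i in range(6, 26)])

# One statement, with each top-10 subquery spelled out in full:
top_both = conn.execute("""
    SELECT * FROM (SELECT sales FROM sales_amount
                   WHERE product='AC' ORDER BY amount DESC LIMIT 10)
    INTERSECT
    SELECT * FROM (SELECT sales FROM sales_amount
                   WHERE product='TV' ORDER BY amount DESC LIMIT 10)
""").fetchall()
print(sorted(s for (s,) in top_both))  # ['p16', 'p17', 'p18', 'p19', 'p20']
```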

No support for step-by-step coding: that is the first major SQL weakness. A stepwise procedure for approaching a complex computation considerably reduces coding complexity, while trying to approach it wholesale only makes the code harder to write and more complicated.

We can just imagine how much a primary school student would feel frustrated when they were required to solve a word problem using only one formula (though there are always smart children who can work it out).

SQL does not support step-by-step coding, but stored procedures written in SQL can. So, can stored procedures handle the above task conveniently and simply?

Let’s just ignore the complex technological environment stored procedures require and the incompatibility caused by differences of databases, and look at how, in theory, the SQL stored procedure simplifies the computation through stepwise coding.

1. Find salespeople whose sales amounts rank in top 10 for air conditioners. The statement is the same as above, but its result set will be stored specifically for use in step 3. As SQL can only store a set of data with a table, we need to create a temporary table:

  1. create temporary table x1 as 
  2. select top 10 sales from sales_amount where product='AC' order by amount desc 

2. Find salespeople whose sales amounts rank in top 10 for TV sets using a similar statement:

  1. create temporary table x2 as 
  2. select top 10 sales from sales_amount where product='TV' order by amount desc 

3. Calculate intersection of result sets of step 1 and step 2. The complicated first two steps lead to a simple last step:

  1. select * from x1 intersect select * from x2 

Coding step by step shows clear thinking, but the temporary tables still complicate the computing process. Intermediate temporary result sets are common in batch structured-data computations; creating a temporary table for each one is inefficient and makes the code unintuitive.
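The temp-table version can also be sketched with sqlite3 (same invented data as before; SQLite's `CREATE TEMP TABLE … AS` plays the stored-procedure role):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_amount (sales TEXT, product TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales_amount VALUES (?,?,?)",
                 [(f"p{i}", "AC", i) for i in range(1, 21)] +
                 [(f"p{i}", "TV", i) for i in range(6, 26)])

# Steps 1 and 2: materialize each top-10 list in its own temp table.
conn.execute("""CREATE TEMP TABLE x1 AS
    SELECT sales FROM sales_amount
    WHERE product='AC' ORDER BY amount DESC LIMIT 10""")
conn.execute("""CREATE TEMP TABLE x2 AS
    SELECT sales FROM sales_amount
    WHERE product='TV' ORDER BY amount DESC LIMIT 10""")

# Step 3: the final step becomes trivial.
both = conn.execute("SELECT * FROM x1 INTERSECT SELECT * FROM x2").fetchall()
print(len(both))  # 5
```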

And SQL does not allow set-type field values, so a field cannot hold a set such as one of the temporary result sets above. This makes some computations unachievable even if we are willing to endure the complex processing.

Suppose we need to find the salespeople whose sales amounts rank in top 10 for every product. It is easy to devise the following algorithm based on the one for the previous task:

1. Group the original table by product, sort each group, and find top 10 records meeting the specified condition in each group;

2. Calculate intersection of all top 10 records.

As we do not know how many products there are in advance, we need to store the grouping result in a temporary table, where one field will store members in each group. But as SQL does not support set-type values, the solution becomes infeasible.

If we have window functions at hand, we can switch to another route. It will group the original table by product, calculate the number of appearances of every salesperson in the top 10 sales amounts of each group, and find those whose total appearances are equal to the number of products – they are the ones whose sales amounts rank in top 10 for all products.

  1. select sales 
  2. from ( select sales 
  3. from ( select sales, 
  4. rank() over (partition by product order by amount desc ) ranking 
  5. from sales_amount) 
  6. where ranking <=10 ) 
  7. group by sales 
  8. having count(*)=(select count(distinct product) from sales_amount) 

But such a SQL query is beyond most users.
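The window-function route does run, for instance on SQLite 3.25 or later (one redundant nesting level removed; data invented):

```python
import sqlite3  # window functions require SQLite >= 3.25

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_amount (sales TEXT, product TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales_amount VALUES (?,?,?)",
                 [(f"p{i}", "AC", i) for i in range(1, 21)] +
                 [(f"p{i}", "TV", i) for i in range(6, 26)])

# Rank within each product, keep the top 10, then keep salespeople who
# appear once per product (i.e. in every product's top 10).
rows = conn.execute("""
    SELECT sales
    FROM (SELECT sales,
                 rank() OVER (PARTITION BY product ORDER BY amount DESC) ranking
          FROM sales_amount)
    WHERE ranking <= 10
    GROUP BY sales
    HAVING count(*) = (SELECT count(DISTINCT product) FROM sales_amount)
""").fetchall()
print(sorted(s for (s,) in rows))  # ['p16', 'p17', 'p18', 'p19', 'p20']
```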

Even then, not all databases support window functions. In that case, we can only write loops using the stored procedure to circularly get the top 10 salespeople in terms of sales amounts in each product, and calculate intersection between the current result set and the previous one. The process is not simpler than the program written in a high-level language, and still involves the complexity of creating temporary tables.

The above shows the second SQL weakness – insufficient set-orientation. Though SQL has the concept of sets, it does not offer them as a basic data type, resulting in roundabout algorithms and code for the large number of set-oriented calculations.

The keyword top is used in the above SQL sample programs. Actually, there isn’t such an operator in relational algebra (but it can be constructed using a series of other operations), and the code is not standard SQL.

Let me show you how difficult it is when the top keyword is not available for finding top N.

Here’s the general way of thinking: For each member, get the number of members where the sales amounts are greater than the current amount, define ranking of the current salesperson according to the number, and get members whose rankings are not greater than 10. Below is the SQL query:

  1. select sales 
  2. from ( select A.sales sales, A.product product, 
  3. (select count(*)+1 from sales_amount 
  4. where A.product=product AND A.amount<amount) ranking 
  5. from sales_amount A ) 
  6. where product='AC' AND ranking<=10 

Or

  1. select sales 
  2. from ( select A.sales sales, A.product product, count(*) ranking 
  3. from sales_amount A, sales_amount B 
  4. where A.sales=B.sales and A.product=B.product AND A.amount<=B.amount 
  5. group by A.sales,A.product ) 
  6. where product='AC' AND ranking<=10 

Even professional programmers find it hard to write. The code is too complicated for such a simple top 10 computation.
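The correlated-subquery workaround does run as written, for example on SQLite; the rank is 1 plus the number of same-product rows with a strictly larger amount (data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_amount (sales TEXT, product TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales_amount VALUES (?,?,?)",
                 [(f"p{i}", "AC", i) for i in range(1, 21)])

# For each row A, count the same-product rows with a larger amount;
# that count plus one is A's rank.
rows = conn.execute("""
    SELECT sales
    FROM (SELECT A.sales sales, A.product product,
                 (SELECT count(*)+1 FROM sales_amount
                  WHERE A.product=product AND A.amount<amount) ranking
          FROM sales_amount A)
    WHERE product='AC' AND ranking<=10
""").fetchall()
print(len(rows))  # 10
```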

Even when SQL supports the keyword top, it only handles the plain top-N problem conveniently. Once the problem becomes a bit more complex, such as getting the members from the 6th to the 10th, or finding salespeople whose sales amounts are 10% higher than the next-ranked one's, the difficulty returns.

This is due to another key SQL weakness: lack of order-based syntax. SQL inherits the mathematical notion of unordered sets, which is the direct cause of difficulties in handling the order-based calculations that are prevalent in real-world business situations (such as calculating link relative ratio, YOY, top 20%, and rankings).

The SQL:2003 standard added window functions to improve SQL's ability to handle order-based calculations. They enable simpler solutions to the above tasks and help mitigate the problem. However, window functions usually come wrapped in nested queries, and since users still cannot access the members of a set directly by position, many order-based calculations remain hard to solve.

Suppose we are trying to find the gender ratio among the above top salespeople by counting the females and the males. Generally, the gender information of salespeople is recorded in an employee table rather than in the sales performance table.

As the list of top salespeople is available, our first thought might be finding their genders from the employee table and then count the numbers. To achieve this cross-table query, SQL needs a table join. So, the SQL code following the above top 10 task is:

  1. select employee.gender,count(*) 
  2. from employee, 
  3. ( ( select top 10 sales from sales_amount where product='AC' order by amount desc ) 
  4. intersect 
  5. ( select top 10 sales from sales_amount where product='TV' order by amount desc ) ) A 
  6. where A.sales=employee.name 
  7. group by employee.gender 

Even a single table join makes the code complicated enough. In practice, related information is often stored in multiple tables with a multilevel structure. For instance, salespeople belong to departments and departments have managers, and we might want to know which managers those top salespeople work under. A three-table join is needed to accomplish this, and writing smooth, clear WHERE and GROUP BY clauses for such a join is not easy.

Now we find the fourth SQL weakness: lack of an object reference mechanism. In relational algebra, the relationship between objects is maintained purely by matching foreign keys. This makes data searching slow and means the record in the related table pointed to by a foreign key cannot be treated directly as an attribute of the current record. Try rewriting the above SQL as follows:

  1. select sales.gender,count(*) 
  2. from (…) // … is the SQL statement for getting the top 10 records of salespeople 
  3. group by sales.gender 

Apparently, this query is clearer and will be executed more efficiently (as there are no joins).

These four key SQL weaknesses, shown through a simple example, are what make SQL statements hard to write and lengthy. Solving a business problem on a given computational system means expressing an algorithm in the syntax of a formalized language (like solving word problems in primary school by translating them into the four arithmetic operations). The SQL defects are great obstacles to that translation. In extreme cases, the strangest thing happens: converting the algorithm into the formal syntax turns out to be much harder than finding the solution in the first place.

In other words, using SQL to compute data is like using an assembly language to do the four arithmetic operations, an analogy programmers may find easy to grasp. A simple formula like 3+5*7 becomes the following when written in an assembly language, say x86:

  1. mov ax,5 
  2. mov bx,7 
  3. mul bx 
  4. add ax,3 

Compared with the simple formula 3+5*7, the above code is complicated to write and hard to read (it is even worse when fractions are involved). Though it may not be a big deal for veteran programmers, it is almost unintelligible to most business people. In this regard, FORTRAN was a great invention.

Our examples are simple because I want the point to be easy to follow. Real-world computing tasks are far more complicated, and users face all sorts of SQL difficulties. A few more lines here and a few more there, and it is no wonder that SQL produces multilevel nested statements hundreds of lines long for a slightly complicated task. What's worse, those hundreds of lines are often a single statement, which makes the code hard to debug from an engineering standpoint and even harder as queries grow complex.

More examples

Let’s look at SQL problems through more examples.

To keep the SQL statements as simple as possible, these sample programs use many window functions and therefore Oracle syntax, which supports window functions well. The syntax of other databases would only make the statements more complicated.

Even for these simple tasks that are common in daily analytic work, SQL is already sufficiently hard to use.

Non-stepwise coding

Coding a complex computation step by step can considerably reduce the complexity, while trying to accomplish it wholesale will only make it even more difficult.

Task 1: Find the number of female employees who come from Beijing in the sales department.

Count employees in the sales department:

  1. select count(*) from employee where department='sales' 

Count employees who are Beijingers:

  1. select count(*) from employee where department='sales' and native_place='Beijing' 

Count female employees who are Beijingers:

  1. select count (*) from employee 
  2. where department='sales' and native_place='Beijing' and gender='female' 

The natural solution is to select the records of employees in the sales department and count them, then keep the Beijingers among them and count those, and finally keep the women and count again. Each query builds on the result of the previous one, which would give simple and efficient code. Yet SQL's non-stepwise coding cannot reference the previous result set, so the full condition has to be rewritten in every query.

Task 2: Select a female employee and a male one from each department to form a team to take part in the game.

  1. with A as 
  2. (select name, department, 
  3. row_number() over (partition by department order by 1) seq 
  4. from employee where gender='male'), 
  5. B as 
  6. (select name, department, 
  7. row_number() over (partition by department order by 1) seq 
  8. from employee where gender='female') 
  9. select name, department from A 
  10. where department in ( select distinct department from B ) and seq=1 
  11. union all 
  12. select name, department from B 
  13. where department in (select distinct department from A ) and seq=1 

The non-stepwise coding leads to bloated code and low computing efficiency, and sometimes to extremely roundabout algorithms.

Here is the intuitive solution: loop through the departments, and from each that has employees of both genders, pick one female and one male employee and add them to the result set. SQL does not support building the final result set step by step (unless it turns to a stored procedure), so it takes a harder route: it selects the male employees and the female employees of each department separately, keeps the members of each result set whose departments also appear in the other, and finally unions them.

If it were not for the WITH clause and window functions, the SQL statement would be unreadable.

Unordered sets

Order-based calculations are prevalent in batch data processing (such as getting the top 3, the record in 3rd position, or the link relative ratio). Because SQL inherits the mathematical concept of unordered sets, it cannot perform such calculations directly and has to take a circuitous route.

Task 3: Find employees whose ages are equal to the median.

  1. select name, birthday 
  2. from (select name, birthday, row_number() over (order by birthday) ranking 
  3. from employee ) 
  4. where ranking=(select floor((count(*)+1)/2) from employee) 

Median calculation is common, and the process is simple: sort the original set and take the member at the middle position. SQL's computational model, based on unordered sets, offers no way to access members by position. It has to invent a sequence-number field and select the eligible member through a conditional query, where subqueries are unavoidable.
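The query runs with sqlite3 (SQLite's integer division of two integers already truncates, so `(count(*)+1)/2` stands in for `floor((count(*)+1)/2)`; five invented employees):

```python
import sqlite3  # row_number() requires SQLite >= 3.25

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (name TEXT, birthday TEXT)")
conn.executemany("INSERT INTO employee VALUES (?,?)",
                 [("ann", "1980-04-01"), ("bob", "1985-06-15"),
                  ("cy", "1990-01-20"), ("dee", "1995-09-09"),
                  ("eli", "2000-12-31")])

# Invent a sequence number, then pick the middle position.
rows = conn.execute("""
    SELECT name, birthday
    FROM (SELECT name, birthday,
                 row_number() OVER (ORDER BY birthday) ranking
          FROM employee)
    WHERE ranking = (SELECT (count(*)+1)/2 FROM employee)
""").fetchall()
print(rows)  # [('cy', '1990-01-20')]
```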

Task 4: Find the largest number of days during which a stock rises consecutively.

  1. select max (consecutive_day) 
  2. from (select count(*) consecutive_day 
  3. from (select sum(rise_mark) over(order by trade_date) days_no_gain 
  4. from (select trade_date, 
  5. case when 
  6. closing_price>lag(closing_price) over(order by trade_date) 
  7. then 0 else 1 END rise_mark 
  8. from stock_price) ) 
  9. group by days_no_gain) 

Unordered sets also lead to tortuous ways of solving problems.

Here is the general way of doing the task: set a temporary variable recording the number of consecutive rising days, initialized to 0; compare each closing price with the previous one, resetting the variable to 0 if the price does not rise and adding 1 if it does; and take the largest value seen when the loop is over.

SQL cannot express that algorithm, so it takes an alternative route: first count, for each date, the number of non-rising days from the initial date up to it. Dates that share the same count form a consecutively rising interval. Then group the dates by that count, tally the members of each group, and take the largest number. This is extremely difficult to understand and even harder to express.
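The roundabout query runs on SQLite 3.25+; one adjustment here is `count(*)-1`, since each running-sum group also contains the non-rising day that starts it (prices invented):

```python
import sqlite3  # lag() and sum() OVER require SQLite >= 3.25

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock_price (trade_date TEXT, closing_price REAL)")
conn.executemany("INSERT INTO stock_price VALUES (?,?)",
                 [("d1", 10), ("d2", 11), ("d3", 12), ("d4", 11),
                  ("d5", 12), ("d6", 13), ("d7", 14), ("d8", 12)])

# rise_mark=1 on non-rising days; its running sum is constant across a
# rising streak, so grouping by it isolates each streak.
row = conn.execute("""
    SELECT max(consecutive_day)
    FROM (SELECT count(*)-1 consecutive_day
          FROM (SELECT sum(rise_mark) OVER (ORDER BY trade_date) days_no_gain
                FROM (SELECT trade_date,
                             CASE WHEN closing_price >
                                  lag(closing_price) OVER (ORDER BY trade_date)
                             THEN 0 ELSE 1 END rise_mark
                      FROM stock_price))
          GROUP BY days_no_gain)
""").fetchone()
print(row[0])  # 3  (the rises on d5, d6, d7)
```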

Insufficient set-orientation

There is no doubt that sets are the basis of batch data processing. SQL is a set-oriented language, but it can only express simple result sets and does not make it a basic data type to extend its application.

Task 5: Find employees whose birthdays are on the same date.

  1. select * from employee 
  2. where to_char(birthday, 'MMDD') in 
  3. ( select to_char(birthday, 'MMDD') from employee 
  4. group by to_char(birthday, 'MMDD') 
  5. having count(*)>1 ) 

The original purpose of grouping a set is to divide it into multiple subsets, so a grouping operation should have returned a set of subsets. However, SQL cannot express such a “set of sets” and thus cannot help forcing an aggregate operation on the subsets to return a regular result set.

At times, what we need isn’t aggregate values but the subsets themselves. To do this, SQL will query the original set again according to the grouping condition, which unavoidably results in a nested query.
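The query runs with sqlite3, where `strftime('%m%d', …)` plays the role of Oracle's `to_char(birthday, 'MMDD')` (three invented employees, two sharing a birthday date):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (name TEXT, birthday TEXT)")
conn.executemany("INSERT INTO employee VALUES (?,?)",
                 [("ann", "1980-03-05"), ("bob", "1992-03-05"),
                  ("cy", "1985-07-01")])

# The grouping condition has to be repeated in a nested query because
# the grouped subsets themselves cannot be kept.
rows = conn.execute("""
    SELECT name FROM employee
    WHERE strftime('%m%d', birthday) IN
          (SELECT strftime('%m%d', birthday) FROM employee
           GROUP BY strftime('%m%d', birthday)
           HAVING count(*) > 1)
    ORDER BY name
""").fetchall()
print(rows)  # [('ann',), ('bob',)]
```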

Task 6: Find students whose scores of all subjects rank in top 10.

  1. select name 
  2. from (select name 
  3. from (select name, 
  4. rank() over(partition by subject order by score DESC) ranking 
  5. from score_table) 
  6. where ranking<=10) 
  7. group by name 
  8. having count(*)=(select count(distinct subject) from score_table) 

The set-oriented solution is to group the data by subject, sort each subset by score, take the top 10 of each, and intersect the subsets. But because SQL cannot express "a set of sets" or intersect an indefinite number of sets, it takes an unusual route: it finds the per-subject top-10 scores using a window function, groups the result set by student, and keeps the groups whose member count equals the number of subjects. The process is hard to understand.

Lack of object reference method

In SQL, a reference relationship between tables is maintained by matching foreign key values. The records pointed to by those values cannot be used directly as an attribute of the referencing records. A query across such tables needs a multi-table join or a subquery, which is complicated to code and inefficient to run.

Task 7: Find male employees whose managers are female.

Through multi-table join:

  1. select A.* 
  2. from employee A, department B, employee C 
  3. where A.department=B.department and B.manager=C.name and 
  4. A.gender='male' and C.gender='female' 

Through subquery:

  1. select * from employee 
  2. where gender='male' and department in 
  3. (select department from department 
  4. where manager in 
  5. (select name from employee where gender='female')) 

If the department field of the employee table is the foreign key pointing to records of the department table and the manager field of the department table is the foreign key that points to records of the employee table, the query condition can be written in the following simple, intuitive and efficient way:

  1. where gender='male' and department.manager.gender='female' 

SQL can only use a multi-table join or a subquery to generate difficult-to-understand statements.
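Both forms run unchanged on SQLite; here is the three-table join with a four-person toy dataset:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (name TEXT, department TEXT, gender TEXT)")
conn.execute("CREATE TABLE department (department TEXT, manager TEXT)")
conn.executemany("INSERT INTO employee VALUES (?,?,?)",
                 [("ann", "sales", "female"), ("bob", "sales", "male"),
                  ("carl", "hr", "male"), ("dana", "hr", "female")])
conn.executemany("INSERT INTO department VALUES (?,?)",
                 [("sales", "ann"), ("hr", "carl")])

# A = the employee, B = their department, C = that department's manager.
rows = conn.execute("""
    SELECT A.*
    FROM employee A, department B, employee C
    WHERE A.department=B.department AND B.manager=C.name
      AND A.gender='male' AND C.gender='female'
""").fetchall()
print(rows)  # [('bob', 'sales', 'male')]
```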

Task 8: Find the companies where employees obtained their first jobs.

Through multi-table join:

  1. select name, company, first_company 
  2. from (select employee.name name, resume.company company, 
  3. row_number() over(partition by resume.name 
  4. order by resume.start_date) work_seq 
  5. from employee, resume where employee.name = resume.name) 
  6. where work_seq=1 

Through subquery:

  1. select name, 
  2. (select company from resume 
  3. where name=A.name and 
  4. start_date=(select min(start_date) from resume 
  5. where name=A.name)) first_company 
  6. from employee A 

Because it lacks an object reference method and has inadequate set-orientation, SQL is also unable to treat a sub table as an attribute (field) of the primary table. A query on the sub table must use either a multi-table join, which makes the statement particularly complex and requires a filtering or grouping operation to align the result set one-to-one with the primary table's records, or a subquery, which computes ad hoc the subset of sub-table records related to each primary-table record, increasing both the amount of computation (the subquery cannot use the WITH clause) and the coding difficulty.

SPL as the solution

SQL problems need to have a solution.

Actually, the above analysis implies a way out. That is, designing a new language that gets rid of those SQL weaknesses.

And this is the original intention of creating SPL.

SPL is the abbreviation for Structured Process Language while SQL’s full name is Structured Query Language. It is an open-source programming language intended to facilitate structured data computations. SPL emphasizes step-by-step coding, and supports ordered sets and object reference method to achieve complete set-orientation, reducing the difficulty of “algorithm translation” sharply.

Here we just present SPL code of the 8 tasks in the previous section, giving you a glance of the language’s elegance and conciseness.

Task 1

SPL keeps a set of records as an intermediate variable, enabling a step-by-step query process.

Task 2

SPL obtains the final result step by step using its own efficient programming logic.

Task 3

Task 4

It is easy for SPL to code an intuitive and direct algorithm.

Task 5

SPL keeps result set of the grouping operation to further process it as it handles a regular set.

Task 6

SPL writes the code smoothly as the intuitive algorithm unfolds.

Task 7

With the support of object reference, it is convenient for SPL to access a field of the record pointed by the foreign key as it gets one of its original fields.

Task 8

SPL allows treating a set of records of the sub table as a field of the primary table and accesses it in the same way of getting its other fields, avoiding repeated computations on the sub table.

SPL has an intuitive IDE that offers convenient debug functionalities to track each step for processing a query, making coding even easier.

For computations inside an application, SPL offers a standard JDBC driver so it can be integrated into a host program, such as a Java application, the same way SQL is:

  1. Class.forName("com.esproc.jdbc.InternalDriver"); 
  2. Connection conn = DriverManager.getConnection("jdbc:esproc:local://"); 
  3. CallableStatement st = conn.prepareCall("{call xxxx(?,?)}"); 
  4. st.setObject(1, 3000); 
  5. st.setObject(2, 5000); 
  6. ResultSet result = st.executeQuery(); 
Profile photo for Barry McConnell

Because it doesn't conform to set theory, the answers are not provably correct.


Profile photo for Bob Purvy

They don’t. So there you go.

SQL is God’s gift to programmers. Any database you’ll ever have to use supports some subset of SQL, and there are standards bodies making sure that it’s done according to the rules.

Furthermore, it is not constantly changing, like some other languages I could name. You can buy a 20-year-old book about it, and most of it still applies.

Profile photo for Simon Gardner

That the syntax for insert and update statements is different.

Also that there is no "forced insert" option (i.e. you have to know in advance whether a given row exists before you execute an insert or update).

eg:

IF EXISTS(SELECT * FROM myTable WHERE id = @myId) THEN
UPDATE myTable SET myField = @myValue WHERE id = @myId
END
ELSE
INSERT INTO myTable (id, myField) VALUES (@myId, @myValue)
END

rather than just say

WRITE INTO myTable (id, myField) VALUES (@myId, @myValue)

and have SQL work out whether it needs to do an insert or update
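Most dialects have since grown something close to this: the SQL standard defines MERGE, and several engines provide single-statement upserts (MySQL's INSERT ... ON DUPLICATE KEY UPDATE, PostgreSQL's and SQLite's INSERT ... ON CONFLICT). A minimal sketch of the latter using Python's built-in sqlite3 module (table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (id INTEGER PRIMARY KEY, myField TEXT)")

def write(conn, row_id, value):
    # One statement: insert if the row is new, update if it already exists.
    conn.execute(
        "INSERT INTO myTable (id, myField) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET myField = excluded.myField",
        (row_id, value),
    )

write(conn, 1, "first")   # inserts
write(conn, 1, "second")  # updates the same row
print(conn.execute("SELECT myField FROM myTable WHERE id = 1").fetchone()[0])
# → second
```

The `excluded` pseudo-table refers to the row that would have been inserted, so the update branch can reuse its values.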


Profile photo for Federico Razzoli

Only details.

  • The list of expressions in SELECT is in the wrong place. We can say “the SELECT clause of the SELECT statement”, which is frankly ridiculous.
  • Two types of NULL should exist. Sometimes NULL clearly means “no value”, but other times it clearly means “unknown value”. If the meaning is not clear, it’s better to have no NULL at all.
  • Columns should not be NULLable by default.
    • But having only the primary key column non-NULLable by default is even worse.
  • No simple syntax to change a single characteristic of a column (i.e. NULLability, comment…).
  • The FROM t1, t2 syntax should not be supported, because it is a less powerful alternative to JOIN.
  • CTE syntax is too complicated.
  • Too many ways to do the same things.
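For the CTE point, this is the WITH syntax in question; a small recursive example, runnable via Python's built-in sqlite3 (names hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A recursive CTE: the WITH clause names a temporary result set
# that the main query can then reference like a table.
total = conn.execute("""
    WITH RECURSIVE nums(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM nums WHERE n < 5
    )
    SELECT SUM(n) FROM nums
""").fetchone()[0]
print(total)  # → 15
```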

On the other side, it is:

  • Almost plain English.
  • Generally simple to write and understand.
  • The only standard-ish language for databases.
  • Inspired by, yet free from, the influence of Date and friends.
Profile photo for Stefanya Poésy (She/Her)

From a Software Engineering perspective, SQL is sorely lacking in some areas that more general programming languages possess…

  1. No provision for symbolic constants (yes, you can declare variables, but that’s hardly the same thing).
  2. No real notion of modularization or logic hiding/abstraction.
  3. SQL Functions have too much overhead to be readily used as a means to simplify and genericize queries.
  4. Relatedly, some other means of in-line parameterization of similar code would allow queries to be greatly simplified. I envision something like the C/C++ preprocessor, though I realize that comes with its own perils.
Profile photo for Arnold Hausmann

Queries getting into production with really long lines, like a couple hundred characters. Yes, the DB engine can parse it just fine, but what about the poor SOB who has to maintain it?

Crazed capitalization. Camel case is fine for Java and other languages, but snake case works best in SQL (IMHO). I don’t mind capitalizing key words, or capitalizing everything, or nothing, but PLEASE don’t DO THIS kind of CRAP in the SQL code BASE BECAUSE it LOOKS f’ing WEIRD!!!!! And, it’s terribly hard to read.

Lack of aliases. When SQL gets really complicated, it helps knowing which tables contain the columns referenced, so you can use a table name or alias. For crying out loud, use the alias instead of the name! With modern databases allowing lengthy object names, I really don’t need to see a column name like “MY_REALLY_REALLY_REALLY_REALLY_REALLY_LONG_TABLE_NAME.FIELD1” in the SQL. Alias the table to something meaningful, like ‘LTN’, or even ‘LNTABLE’.

Profile photo for Maciej Dziardziel

Oh, there are a few things:

  1. It's not tab-completion friendly. To fix that, the from section should go first:
    Select <tab> … well, there is nothing to work with here, while:
    From some_table select <tab> could provide useful completion.
    (“from” should also be renamed to “using”.)
  2. It's error-prone. Again, section order is the culprit. Using some_table where conditions delete; would be much safer than what is there now.
  3. It's not very well standardized. Even the most essential things like auto_increment have different syntax or semantics between databases.
  4. Nulls are allowed by default. It should be the other way around.
  5. Schema migrations lack DB support. For example, if you define an index on a field, typically you are required to name it, and people name it after the field name(s). Renaming the field later will not suggest, require, or perform renaming of the index, leaving you with not-so-helpful index names. The database does not store a history of migrations, so you may accidentally add or miss one when you try to manage them.
  6. Standard tooling sucks terribly.
Profile photo for Greg Kemnitz

I’m going to say TRIGGERs.

Lots of people love triggers. And then they wonder why their database performance is awful, or why they get a particular answer from a query that ends up using a trigger that they didn’t expect.

Triggers often embed what should be application logic in the database, and can cause all manner of headaches when trying to test and deploy application upgrades if you’re not Very, Very Careful in how the triggers are managed and maintained.

VIEWs can be similarly abused, and can cause related problems, particularly if developers fall in love with “abstraction” and start creating “view pyramids” or views on top of views.

Profile photo for Mark Flory

I don’t know if this is the most annoying or the most annoying at this very moment but I do not like the way the industry tends to want to treat us as cogs in a machine. Programming cogs. There is a lot of marginalization.

I have been developing software professionally for something like 25 years. In that time I have learned a thing or two about how software is built. That knowledge did not come easily or cheaply, but it did come eventually. I believe there is tremendous value in that knowledge. There is a thing called wisdom. The industry places almost no value on it.

Before I go too far, I want to say this is not about being old, even though I am. I believe there was value all along in my experience at building software: at three years of experience, at six, at twelve. But at twenty-five years it is even more valuable now. I have learned things this year that I would not even have taken notice of a few years before.

But, by and large, the only thing the industry is interested in is time on an individual piece of technology, the alphabet soup. Here is a line item from an actual job ad I just pulled:

Angular Developer/Front End Developer

* At least 5 years’ development experience with Angular

Which is funny because… Angular (application platform) - Wikipedia [ https://en.wikipedia.org/wiki/Angular_(application_platform) ] - Initial release 2.0 / 14 September 2016; 21 months ago

It’s so dumb, not just because they are asking for more experience than anyone could have, but because my wisdom tells me that time spent on a technology is almost irrelevant. I have been programming in C# for 16 years, about as long as you really can. I believe I would know just as much about it if I had instead been developing really steadily on it for six months. Learning a new language, a new framework, a new database is relatively trivial. At least, and this is one of my pieces of wisdom, enough to get the job done.

But the sixteen years I spent developing software in C# has actual value, not because of the technology itself but because I learned to build software in it. I could learn PHP tomorrow and all that knowledge still comes with me.

This is really just a symptom of the overall problem, the increasing marginalization of what we do. They want to put us in buckets, labeled with the appropriate acronyms and put front-end/back-end or full stack on there and then they know which pieces to put where.

Part of my annoyance is that some of you are more than happy to jump in their buckets:

Oh, looks like the hot thing now right this nano-second is to be a JavaScript React, Angular Node Applications Development Engineer that is half caf white macchiato with a lemon twist. So you go try to be that and by the afternoon you learn that is no longer in favor. Now it’s BLOCKCHAIN, why don’t you know BLOCKCHAIN???

I believe the industry wants you to do this because they want to hold on to this failed, military-born notion that everyone is replaceable. One soldier goes down, the next one comes up. If you are a cog, they can just remove you from their machine and replace you with one that is exactly the same.

But it isn’t true. Everyone is EVENTUALLY replaceable. But some people are actually quite indispensable for a long time after they decide to dispense with them.

I was at a layoff party for people who worked with my wife. They all had about six months notice they were going to be let go. There was one lady there who kept getting phone calls from people in the company wondering when she was going to take care of some task that she always took care of. She took great joy in telling them she was sitting in a bar because, as she had told them for much of the six months prior, she no longer worked there. They hired her as a consultant the next week and as far as I know she is still there.

You are not a cog. You are a precision crafted part of a whole, and you reshape yourself constantly to make the machine work. You are not a sum of your technologies. You are not your f***ing khakis.

Even if the industry insists on believing you are a cog, the last thing you should ever do is believe it about yourself.



Profile photo for Ian Joyner

Fundamentally, SQL was designed by people who weren’t relational database experts and did not understand the principles of Codd.

For example SQL SELECT does two things from relational algebra: select and project. Should they be separate notions and map to relational algebra better? It is debatable.
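Concretely: relational algebra's restriction (Codd's "select", σ) picks rows, projection (π) picks columns, and SQL folds both into the one SELECT statement. A toy illustration using Python's built-in sqlite3 (table and data hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, dept TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [("ann", "qa"), ("bob", "dev"), ("cid", "qa")])

# One SELECT statement performs two algebra operations:
#   WHERE dept = 'qa'  -> restriction (Codd's "select"): picks rows
#   SELECT name        -> projection: picks columns
rows = conn.execute("SELECT name FROM emp WHERE dept = 'qa'").fetchall()
print(rows)  # → [('ann',), ('cid',)]
```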

C.J. Date has written about the many problems in SQL, including non-orthogonality. Many of his points might seem subtle and difficult to understand. However, these result in less convenience for programmers and language constructs that are difficult to understand and remember.

Chris Date and the Relational Model
Chris Date is famous for his writings on relational theory. At IBM in the seventies, and afterwards, he was a friend and colleague of Edgar Codd, who invented the principles of the relational database. Chris took on the role of communicating and teaching Codd's relational theory, and reluctantly admits to a role in establishing SQL as the dominant relational language. Richard Morris interviews the well-known author and computer scientist

https://courses.cs.duke.edu/spring03/cps216/papers/date-1983.pdf

SQL designed by people who didn’t understand Codd’s work:

http://scihi.org/codd-relational-database-model/

Unfortunately, this is typical in the computing industry. While some think it is important to get things right and not burden others with poor designs, others want to rush to market, benefit from market power and lock-in, and leave the burden to thousands and millions of others.

Chris Date and Hugh Darwen have designed an alternative in Tutorial D. Dave Voorhis — one of the good Quora writers — actually has an implementation.

https://reldb.org
Profile photo for Jesper Madsen

One of the worst things from a developer perspective is that it is declarative. It is a fantastic feature that the language is parsed and optimised better than most developers could manage, but it requires an optimiser, and often that optimiser needs heuristics to work. This means that if you are dealing with large databases, you cannot predict how queries will perform given another data distribution. The problem is that the query planner will sometimes change plans to a “better” execution plan. Sometimes the query planner is right, and no one notices, but sometimes it is really poor, and executions go from a sub-second query to minutes or half-hour queries. A consultant is flown in with a heavy price tag to improve performance. Queries are re-written and forced to a specific behaviour, e.g. with hints or by pinning some query plans. And performance is almost restored.
If SQL were not declarative, it would be easier to measure performance. Just measure a couple of points when data fits in RAM, and a couple of points when it no longer fits in RAM, and you would have a rough idea of the gradual performance degradation due to large amounts of data.

With SQL databases being as smart as they are, they provide excellent performance until they suddenly don’t. And the initial performance is not due to a developer having investigated the smartest way to perform a query. All that thought is often done in retrospect, when the database can no longer find smarter query plans than you can. But even at this point, it might just be that the database's statistics are out of date.
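The plan the optimiser chose can at least be inspected before it bites: most engines expose it (EXPLAIN in MySQL and PostgreSQL, EXPLAIN QUERY PLAN in SQLite). A small sketch with Python's built-in sqlite3 (schema hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("CREATE INDEX t_val ON t (val)")

# Ask the planner how it would run the query, without running it.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE val = ?", ("x",)
).fetchall()
for row in plan:
    print(row)  # e.g. a SEARCH step using index t_val
```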

Profile photo for Greg Kemnitz

As someone who has done some DB kernel programming, my biggest frustration is when DB engines are Doing Something Stupid. It makes me want to download the source code and fix it - although my employers typically won’t let me :)

Some stupidities:

  • MySQL still can’t do “constant subquery IN” without re-running the inner subquery over and over. While there’s a simple fix - convert the IN to an EXISTS - this query form is common enough and the performance hit is bad enough that it really should be fixed. Converting IN to EXISTS often results in a query going from O(many minutes) to O(milliseconds).
  • Lots of ORs with constant values should be converted into a constant IN by the query optimizer by MySQL. But it isn’t.
  • And random other ones :)
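The IN-to-EXISTS rewrite mentioned above looks like this; both forms return the same rows, and only the execution strategy differs. A sketch with Python's built-in sqlite3 (schema hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, cust_id INTEGER);
    CREATE TABLE vips (cust_id INTEGER);
    INSERT INTO orders VALUES (1, 10), (2, 20), (3, 10);
    INSERT INTO vips VALUES (10);
""")

# The form some optimizers re-evaluate per row:
in_rows = conn.execute(
    "SELECT id FROM orders WHERE cust_id IN (SELECT cust_id FROM vips)"
).fetchall()

# The equivalent correlated-EXISTS form:
exists_rows = conn.execute(
    "SELECT id FROM orders o WHERE EXISTS "
    "(SELECT 1 FROM vips v WHERE v.cust_id = o.cust_id)"
).fetchall()

print(in_rows == exists_rows)  # → True
```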

Oracle should do the world a big fat favor and implement hash joins in MySQL. The machinery is pretty much already there, as MySQL has temp tables and memory tables - someone just has to do it…

Some other ones:

  • “Clever” queries that people think are Awesomely Kewl because they use a bunch of interview-question features but that Suck when they’re actually deployed using a real dataset. People get annoyed when I unpack these into more icky-looking query packages that use temp tables and such things, but actually complete in a reasonable time.
  • When I forget to use a left or right-join somewhere in the midst of a big ugly query and don’t realize it until someone else realizes that the results are wrong. As these are typically in internal report queries, they aren’t tested as carefully as customer-facing code, so this happens more often than I’d like.

A thing I like is that SQL has a pretty simple quality metric: fast is better than slow.

Profile photo for Quora User

You might like my book SQL Antipatterns: Avoiding the Pitfalls of Database Programming, which I wrote about exactly this subject.

I suppose in general, programmers make mistakes in SQL of the same root cause that they do in other code.

  • Neglecting to use powerful features built into the language, like understanding correct use of JOIN, GROUP BY, HAVING, UNION.
  • Not applying best practices of architecture and logical organization. With respect to SQL, this means normalization and choosing data types.
  • Assuming that there’s no cost to accessing data. Neglecting to test how their code works when processing large-scale collections of data.
  • Trying to do too much work on a modestly-sized database server. Not enough memory, storage capacity, CPU cores.
  • Trying to do too much work in the database all at once, when you should be diversifying with caching and message queues.
  • Neglecting a backup & restore strategy.
  • Running dynamic SQL queries that include untrusted content (SQL injection vulnerabilities).
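
The last bullet deserves a concrete illustration. A minimal sketch in Python against SQLite (invented table and data) of why parameterized queries beat pasting untrusted content into the SQL string:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
con.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "alice' OR '1'='1"  # hostile input

# Vulnerable: untrusted content concatenated into the SQL string
vulnerable = con.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()  # the injected OR '1'='1' matches every row

# Safe: a placeholder keeps the input as data, not as SQL
safe = con.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()  # no user is literally named "alice' OR '1'='1"
```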
Profile photo for EMU

Typically because they treat it as a programming language instead of a way of writing formulas in set theory.

You don't learn SQL by learning its syntax. You learn it by understanding the math behind it.

Profile photo for Ryan Gedwill

First, I would stop writing SQL commands on their own to run scripts. You can easily set up code in your favorite language and, at worst, just run a SQL command against the database as a string. Sometimes it's less of a headache to play around with everything in a programming language instead of just SQL. You can put data into objects and iterate through them, you can talk to other APIs, you have way more flexibility over your variables, etc. It shouldn't take long to make a standalone Python script which connects to your database, and then you can run any SQL command you want directly from it, giving you access to all the Python libraries and features for your scripts.

Second, use TRANSACTIONS. With any decent-sized database you don't want to complete something halfway and then have it fail. It can make things a headache.
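
A minimal sketch of that advice in Python against SQLite (invented accounts table, failure simulated): either both halves of the transfer land, or neither does.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
con.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")
con.commit()

try:
    # First half of the transfer: debit account 1
    con.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
    # Suppose something goes wrong before the credit lands:
    raise RuntimeError("simulated mid-transfer failure")
    con.commit()  # never reached in this run
except RuntimeError:
    con.rollback()  # the debit above is undone; nothing happened halfway

balances = con.execute("SELECT balance FROM accounts ORDER BY id").fetchall()
# balances is unchanged: the partial update was rolled back
```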

Third, always backup the database before you write something. Always write the WHERE clause first before you even type the words DELETE or UPDATE.

Fourth, read about it sometimes. It's easy to think you know it all just because you know the basics of INSERT, UPDATE, SELECT, and DELETE, but there's a lot more: JOINs, GROUP BYs, subqueries can save you a ton of time. Again, these can all be covered by writing solid backend code in something like Python, where you just iterate through your data or write multiple queries in the same script. You never know when you'll need a solid regular expression, too.

Fifth, to add to my point about putting the data handling in a real programming language and passing SQL queries in: you get to use a real debugger. Export your data to Excel and manipulate it with Python or Java. Your code can grab all the data at once and put it into predefined objects in the programming language, and now you have way more tools at your disposal. No need to figure out how to do it with JUST queries.

Profile photo for Jacob Case

The longest SQL script I have seen to date was 10,548 lines long. This particular script was being used to evaluate a messy string input, and someone had developed several huge CASE statements to attempt to handle every possible incoming scenario. It was later rewritten to use regular expressions to do the same thing in considerably fewer lines.

Extremely long SQL scripts are sometimes the result of inexperience. To some extent, this was the case with the huge CASE statements vs using regular expressions.

I remember when I was just starting out, I wrote a huge SQL query (which I was ignorantly very proud of at the time) that had 8 LEFT OUTER JOINs to the same huge subquery with 1 single predicate that was different between each of the 8 subqueries. I didn’t know about Views, Temporary Tables, Common Table Expressions (CTEs), Table-Valued Functions (TVFs), or other alternatives to more gracefully do what I needed to do. I could have just used a Table-Valued Function or a Temporary Table and turned that 1,000 line SELECT command into something like a 50-line SELECT command.
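
To make the alternative concrete, here is a hedged sketch (Python against SQLite, invented tables) of using a CTE to name a shared subquery once instead of pasting it into every JOIN:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER);
    INSERT INTO sales VALUES ('north', 10), ('north', 30), ('south', 5);
""")

# A CTE names the shared subquery once; every reference reuses it,
# instead of repeating the same subquery in 8 different JOINs.
rows = con.execute("""
    WITH region_totals AS (
        SELECT region, SUM(amount) AS total
        FROM sales
        GROUP BY region
    )
    SELECT s.region, s.amount, rt.total
    FROM sales s
    JOIN region_totals rt ON rt.region = s.region
    ORDER BY s.region, s.amount
""").fetchall()
```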

Extremely long SQL scripts can also be the result of neglecting to refine and tune the code. Some developers think their code is ready to go to Production as soon as it correctly implements the business requirements. But after the first pass, the code is rarely as optimal as it can be. Code that is truly fully-baked and hardened has had several passes and refinements.

Another consideration is the type of environment. OLTP environments will generally have short SQL scripts, and Data Warehouses (OLAP environments) will tend to have longer, more involved SQL scripts. This is just the nature of the different environments.

Profile photo for Quora User

Can't say that I or anyone really “work in databases”. At most we work WITH them. They are, after all, just tools… although of the multiuser and persistent kind.

My biggest problems are…

  • Sometimes complex queries run for a very long time. Hours, or forever. They need to be broken up by hand to work, and even that doesn't always help.
  • Non-compatible SQL extensions. Not that they're needed often, but sometimes one has a reason to change vendor.
  • Historic baggage in SQL. For instance, why doesn't it let us write substring(xxxx, ‘,’, ‘.’) directly, or let us store a sub-expression for later use… to, for example, extract the second part of a CSV value…
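
To make that last complaint concrete: extracting the second field of a CSV value in standard SQL forces you to repeat the same substr/instr subexpression, because there is no way to give it a name mid-query. A sketch in Python against SQLite (invented table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (val TEXT)")
con.execute("INSERT INTO t VALUES ('a,b,c'), ('x,y')")

# substr(val, instr(val, ',') + 1) is "everything after the first comma".
# SQL gives no way to bind that subexpression to a name inside one
# SELECT, so it appears twice -- exactly the baggage being complained about.
second_fields = [r[0] for r in con.execute("""
    SELECT substr(
             substr(val, instr(val, ',') + 1),
             1,
             instr(substr(val, instr(val, ',') + 1) || ',', ',') - 1)
    FROM t
""")]
```
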
Profile photo for Adrian Ho

I have only two frustrations when running SQL queries, likely shared by just about everyone:

  1. Why is it taking so long?!?!
  2. Why am I getting these results?!?!

But my biggest frustration when composing SQL queries (which is what I think you’re really asking) is losing perspective.

It’s too easy to spend hours diving deep into crafting the “perfect” SQL query, that retrieves exactly what I want, and only what I want.

It’s too easy to forget that the query results will usually be processed in a different language (PHP, C++, etc.) that can shoulder part of the burden, and in a more expressive way. I’ll likely be iterating over the result set in my programs anyway, so why not let SQL do what it does best (JOINs and coarse filtering) and do the finessing where it’s more easily done?

Once I step back from the SQL abyss and consider my full range of options, using each tool to its strengths, I feel like a weight has been lifted from me.
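
A minimal sketch of that division of labor (Python against SQLite, data invented): SQL does the coarse filtering, the host language does the finessing.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (usr TEXT, note TEXT)")
con.execute(
    "INSERT INTO events VALUES "
    "('ann', 'ok'), ('ann', 'ERROR: disk'), ('bob', 'ERROR: net')"
)

# Let SQL do what it does best: coarse filtering...
rows = con.execute(
    "SELECT usr, note FROM events WHERE note LIKE 'ERROR%'"
).fetchall()

# ...and do the finessing in the host language, where it's more expressive.
by_user = {}
for usr, note in rows:
    by_user.setdefault(usr, []).append(note.split(': ', 1)[1])
```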

Q: What is your biggest frustration when running SQL queries?

Profile photo for Alex Pixley

What is the worst feature in SQL?

Because the SELECT clause comes before the FROM clause, code-completion aids like IntelliSense cannot offer column suggestions as no table(s) have been specified. You can, of course, type SELECT * FROM <table> and then go back, but it interrupts the natural flow. If a future standard (SQL:2021 ?) made the order of these clauses optional, this would support legacy code but enable IntelliSense to become even more useful. The SQL-92 standard changed how JOINs work, so there is precedent for this type of structure change.

Profile photo for R M

If you are facing difficulties in SQL subqueries, don't worry, you are not alone. Subqueries can be complex and tricky, but with some practice and patience, you can master them.

Understand the Basics of Subqueries

The first step in mastering subqueries is to understand the basics. A subquery is a SELECT statement within another SELECT statement. The inner query is executed first, and its results are used by the outer query to produce the final result set. There are two types of subqueries: correlated and non-correlated. In a correlated subquery, the inner query depends on the outer query, while in a non-correlated subquery, the inner query is independent of the outer query.
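
A minimal pair of examples (Python against SQLite, invented table) showing the two kinds side by side:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE emp (name TEXT, dept TEXT, salary INTEGER);
    INSERT INTO emp VALUES ('a', 'eng', 90), ('b', 'eng', 70), ('c', 'hr', 50);
""")

# Non-correlated: the inner query runs independently of the outer one.
above_avg = con.execute("""
    SELECT name FROM emp
    WHERE salary > (SELECT AVG(salary) FROM emp)
    ORDER BY name
""").fetchall()

# Correlated: the inner query references the outer row (e.dept),
# so conceptually it is re-evaluated for each outer row.
top_in_dept = con.execute("""
    SELECT name FROM emp e
    WHERE salary = (SELECT MAX(salary) FROM emp e2 WHERE e2.dept = e.dept)
    ORDER BY name
""").fetchall()
```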

Practice with Simple Examples

Once you understand the basics of subqueries, it's important to practice with simple examples. Start with basic queries that only involve one subquery and build up to more complex queries. You can use online resources or a database software like MySQL or PostgreSQL to practice. Don't be afraid to experiment and try different variations of subqueries.

Pay Attention to Syntax and Formatting

SQL is a syntax-sensitive language, so it's important to pay attention to the syntax and formatting of your subqueries. Make sure you use the correct keywords and operators, and that your queries are properly formatted. Incorrect syntax can lead to errors and unexpected results.

Use Visualization Tools

If you're struggling to understand how a subquery works, visualization tools can help. Many database software programs have built-in query visualization tools that can help you see how your subqueries are working. You can also use third-party tools like SQL Fiddle or SQL Formatter to visualize and format your queries.

Get Help from Online Resources

There are many online resources available to help you learn SQL and subqueries. Websites like Stack Overflow and SQL Tutorial provide examples and explanations of subqueries. You can also join online communities like Reddit's SQL community or the SQL Server Central forum to ask questions and get help from experts.

Profile photo for Quora User

This is not the hardest problem, but it is one of the most useful to solve.

One of the classic problems with the relational model is representing hierarchical structures.

Imagine you want to represent a hierarchical domain security system where:

  • Each object is associated with one and only one domain.
  • Each domain is contained in another (with an unpredictable number of levels), except the root domains (domains without a parent).
  create table domains(
      id bigserial not null,
      parent_id bigint,
      name character varying not null,
      PRIMARY KEY (id),
      UNIQUE (parent_id, name),
      FOREIGN KEY (parent_id) references domains(id)
          ON UPDATE RESTRICT
          ON DELETE SET NULL
  );
  create table things(
      id bigserial not null,
      domain_id bigint not null,
      ...
      PRIMARY KEY (id),
      FOREIGN KEY (domain_id) references domains(id)
          ON UPDATE RESTRICT
          ON DELETE CASCADE
  );

Then, you want to know which “things” are in a domain, and you realize that the answer is:

  • Things defined directly in the domain
  • plus Things defined directly in the child domains
  • plus Things defined directly in the children of the child domains
  • plus …

With classic SQL it is impossible to write a query that gives you the required result: SQL (at least before recursive CTEs were added in SQL:1999) does nothing with relationships of unpredictable recursive depth.

Solving this problem requires you to maintain an additional data structure (a table) to lean on.
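
For what it's worth, SQL dialects from SQL:1999 onward (PostgreSQL, SQLite, and others) did eventually get unbounded recursion via WITH RECURSIVE. A sketch against a simplified version of the schema above, run in Python on SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE domains (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT);
    CREATE TABLE things (id INTEGER PRIMARY KEY, domain_id INTEGER, name TEXT);
    INSERT INTO domains VALUES (1, NULL, 'root'), (2, 1, 'child'), (3, 2, 'grandchild');
    INSERT INTO things VALUES (10, 1, 'a'), (11, 2, 'b'), (12, 3, 'c');
""")

# WITH RECURSIVE walks the hierarchy to any depth: start from domain 2,
# then repeatedly add the children of domains already found.
things_in_2 = con.execute("""
    WITH RECURSIVE subtree(id) AS (
        SELECT 2
        UNION ALL
        SELECT d.id FROM domains d JOIN subtree s ON d.parent_id = s.id
    )
    SELECT t.name FROM things t JOIN subtree ON t.domain_id = subtree.id
    ORDER BY t.name
""").fetchall()
```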

When I had to solve this problem 25 years ago, I first used a very complex data structure (not valid for “volatile” object hierarchies):

  • If you draw the “tree” of nodes and visit them in post-order (children first), you obtain a sequence: assign each node its position in that sequence
  • Store in each node the sequence value of the smallest contained node
  • The “biggest” contained node is the node itself

The resulting data structure could be:

  • Node 11 contains all nodes from 1 to 11 (for simplicity, a node contains itself in this kind of structure, and its sequence value is the greatest)
  • Node 4 contains nodes 2 to 4
  • Node 6 contains only itself

If we call a node's own sequence value “bigger” and the sequence value of its smallest contained node “smaller”, then we can create a table “domain_containment” as

  create table domain_containment (
      domain_id bigint not null,
      bigger bigint not null,
      smaller bigint not null,
      FOREIGN KEY (domain_id) references domains(id)
          ON UPDATE RESTRICT
          ON DELETE CASCADE,
      PRIMARY KEY (bigger)
  );

Now you can perform a select to obtain the domains contained in domain X:

  select contained.domain_id
  from
      domain_containment container,
      domain_containment contained
  where
      container.domain_id = $1 AND
      container.smaller <= contained.bigger AND
      container.bigger > contained.bigger;

The main problem with this structure is that you have to maintain a sequential value that changes each time you insert or delete a domain (whether a parent or a child).

The second problem is that the “bigger” and “smaller” fields are not references to anything, so the relational system itself can't ensure integrity between your structure and the “domains” table: it is your responsibility (and you will probably run batch tasks to clean up and rebuild it from time to time).

Experience and restrictions eventually led me to an efficient and much simpler, relational-friendly solution:

A table with all the pairs (domain_id, ancestor_id)

  create table containers (
      domain_id bigint not null,
      container_id bigint not null,
      primary key (domain_id, container_id),
      foreign key (domain_id) references domains(id) on update restrict on delete cascade,
      foreign key (container_id) references domains(id) on update restrict on delete cascade
  );

To obtain all domains contained in a domain, the select could be:

  select domain_id
  from containers
  where containers.container_id = $1;

This is a fast and simple select!!!

And what about the number of pairs we have to maintain? Well, for a reasonably balanced hierarchy the depth of the domains tree is O(log N), where N is the number of nodes, so the total number of pairs is O(N log N): far from the N² you probably imagined it would be.

And what about maintaining this structure?

Just define a trigger on domains insert. When you add a new domain:

  • Take the new domain's parent identifier and copy the parent's containers pairs, assigning “domain_id” to be the new node
  • Insert the missing (domain_id, parent_id) record

When you remove a node (a domain in our example):

  • The foreign keys' ON DELETE CASCADE will remove all affected pairs
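
Putting those maintenance rules together, a sketch of the trigger in SQLite syntax run from Python (simplified schema, foreign keys omitted for brevity; the original bigserial columns are PostgreSQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE domains (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT);
    CREATE TABLE containers (
        domain_id INTEGER NOT NULL,
        container_id INTEGER NOT NULL,
        PRIMARY KEY (domain_id, container_id)
    );

    -- On insert, copy the parent's ancestor pairs for the new domain,
    -- then add the (new domain, parent) pair itself.
    CREATE TRIGGER domains_closure AFTER INSERT ON domains
    WHEN NEW.parent_id IS NOT NULL
    BEGIN
        INSERT INTO containers (domain_id, container_id)
            SELECT NEW.id, container_id FROM containers
            WHERE domain_id = NEW.parent_id;
        INSERT INTO containers VALUES (NEW.id, NEW.parent_id);
    END;
""")

con.execute("INSERT INTO domains VALUES (1, NULL, 'root')")
con.execute("INSERT INTO domains VALUES (2, 1, 'child')")
con.execute("INSERT INTO domains VALUES (3, 2, 'grandchild')")

# The grandchild ends up paired with both of its ancestors.
pairs = sorted(con.execute("SELECT domain_id, container_id FROM containers"))
```
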

I think I have written this solution many times, in all kinds of projects, over the last 20 years, and it always surprises me how easy it is.

And that's all, folks!

Profile photo for Blake Senftner

Google search quality has dropped to the degree that it is worthless when I am researching software libraries or issues. The SEO game has destroyed one's ability to locate quality information online.

Tech moves fast. Faster than books can be published. So online is one’s only reference for many technical subjects. And here we have Google completely failing in their primary mission of filtering through the advertisements to locate quality information.

It’s amazing, how incredibly useless Google has become.

Profile photo for Jayesh Lalwani

There are many kinds of programming languages. The kind that a lot of programmers learn first are imperative programming languages. Every imperative language has a syntax and a library of commands, and you use these to tell the computer what to do. Let's say you want an ice cream, and you had a butler who behaved like an imperative programming language: you would have to tell the butler what kind of ice cream you want, which store to go to, and how to get there.

SQL is a declarative programming language. In SQL, you tell the computer what you want. If SQL were a butler, you would just tell him “I want an ice cream”, and he will go figure out the rest. In our case, the database is the butler. You tell the database what data you want, and it will figure out the best way to get that data.

Most developers who started learning programming in imperative languages get a bit shocked by declarative languages. If you are used to imperative languages, you are used to thinking about how to solve the problem step by step, whereas with declarative languages you only have to think about what you want. You have to shift your mindset.

Now, the thing is, every imperative language provides some sort of abstraction to the programmer. Languages that provide a higher abstraction are called high-level languages. The more high-level the language is, the closer it gets to being declarative. In fact, declarative languages are implemented in imperative languages.
For example, if you are using assembly language (the lowest-level language available), you are telling the computer which bytes to fetch from the memory bank into which registers, and which circuits to activate within the CPU.
If you use C or Java, you tell it to add 2 numbers, and the compiler figures out how to generate the machine code (or bytecode) that does the actual operations. Here, C and Java are closer to being declarative than assembly language is.
Another example is Scala, which is even closer to being declarative than C/Java. If you want to convert a list of Shmoops to a list of Broops, you just tell it how to convert one Shmoop to one Broop, and it converts all the Shmoops to Broops. Hook Scala up to an engine like Apache Spark, and Spark figures out how to run that conversion on a grid of computers. All the details of how the tasks get distributed on the grid are hidden by Spark.
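
The contrast can be shown in a few lines (Python standing in for the imperative side, SQLite for the declarative one; data invented):

```python
import sqlite3

prices = [3, 12, 7, 20]

# Imperative: say HOW -- loop, test, accumulate, step by step.
expensive = []
for p in prices:
    if p > 10:
        expensive.append(p)

# Declarative (SQL): say WHAT -- the engine decides how to fetch it.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (price INTEGER)")
con.executemany("INSERT INTO items VALUES (?)", [(p,) for p in prices])
declared = [r[0] for r in con.execute(
    "SELECT price FROM items WHERE price > 10 ORDER BY rowid"
)]
```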

Profile photo for Dan Monroe

I hate clunky APIs.

I hate when requirements dictate you work with a crappy framework/architecture.

I hate unrealistic expectations from non-technical people (or even technical people who don’t understand what is going on) and arbitrary deadlines.

I hate how some organizations show little respect to their software engineers.

I hate surprise requirements that cause major rework.

Other than that, building software is pretty awesome.

Profile photo for Alex Pixley

Short of either 1) lobbying the ISO/IEC Information Technology Task Force to make improvements to the SQL standard or 2) building a small company into an international database powerhouse to rival Oracle and Microsoft and demanding seats on the ISO/IEC Information Technology Task Force, it is probably not possible. Having said that, the ISO SQL standard has incrementally improved over the years so it remains the best option for most things.

Profile photo for Art Kagel

One thing about SQL that one is not taught in school or database training programs which is probably the most important thing one needs to learn is:

Every SELECT statement can be written many different ways to return the same result. If you have not tested them all, then the form of the query that you have selected may not be the most efficient or the fastest.

Some of the “versions” of a given SELECT include:

  • Simple JOIN
  • Correlated sub-query (note: every correlated sub-query that does not include an aggregation can be rewritten as a simple JOIN)
  • Non-correlated sub-query through IN (), NOT IN (), EXISTS (), or NOT EXISTS ()
  • Derived table expression
  • Common table expression
  • ANSI-89 JOIN syntax:

    SELECT *
    FROM frsttable f, scndtable s, thirdtable t
    WHERE s.keycol = f.keycol AND f.keycol = t.keycol
      AND s.fltrcol = 4;

  • ANSI-92+ JOIN syntax with filters in the WHERE clause:

    SELECT *
    FROM frsttable AS f
    JOIN scndtable AS s
      ON s.keycol = f.keycol
    JOIN thirdtable AS t
      ON t.keycol = f.keycol
    WHERE s.fltrcol = 4;

  • ANSI-92+ JOIN syntax with filters in the ON clauses:

    SELECT *
    FROM frsttable AS f
    JOIN scndtable AS s
      ON s.keycol = f.keycol
    JOIN thirdtable AS t
      ON t.keycol = f.keycol
      AND s.fltrcol = 4;
  • ON clause filters are processed pre-join, while WHERE clause filters are processed post-join, which can require the join results to be saved to a temp table and then fetched using the WHERE filter(s). (No one tells you this tidbit either!)
  • Break a single query into multiple queries linked by partial results in temp tables
  • Break a single query into multiple queries linking data in application or middleware space
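
As a small demonstration that two of these versions really are interchangeable, a sketch in Python against SQLite using the answer's own table names (data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE frsttable (keycol INTEGER);
    CREATE TABLE scndtable (keycol INTEGER, fltrcol INTEGER);
    INSERT INTO frsttable VALUES (1), (2), (3);
    INSERT INTO scndtable VALUES (1, 4), (3, 4), (3, 5);
""")

# Version 1: correlated sub-query through EXISTS
v1 = con.execute("""
    SELECT f.keycol FROM frsttable f
    WHERE EXISTS (SELECT 1 FROM scndtable s
                  WHERE s.keycol = f.keycol AND s.fltrcol = 4)
    ORDER BY f.keycol
""").fetchall()

# Version 2: the same result via a JOIN against a derived table
v2 = con.execute("""
    SELECT f.keycol
    FROM frsttable f
    JOIN (SELECT DISTINCT keycol FROM scndtable WHERE fltrcol = 4) s
      ON s.keycol = f.keycol
    ORDER BY f.keycol
""").fetchall()
```

Same rows either way; which form is faster depends on the optimizer, the data, and the engine, which is exactly the answer's point.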

In addition, you should test embedded queries with different filter values, because your RDBMS's optimizer may choose different query paths for different filter values due to the relative concentration of one value versus another in the data. While the “different” query plan may be faster overall at returning all matching data, the “original” query plan may return initial data faster, which may impact your application's perceived responsiveness. Similarly, testing on a development system with a limited data set may not only perform differently (faster or slower) just due to scale, but may choose a different query plan, perhaps one that exhibits behavior that is not acceptable (as in the filter-value case). Again: test, test, test until you find the right version of your query to embed in your application for the rest of time!

Profile photo for Alex Pixley

I remember doing an application upgrade about 10–12 years ago. A co-worker was assigned to assist me because the process was involved and he’d been through something similar with a different client. I was not happy that he was to be the lead, until…

So he takes a backup and even runs verify on the backup (just in case, right?). He runs through the process and hits a snag, so we need to revert. But the backup would not restore. It turns out that there was a bug in our specific version of SQL Server (I don't remember the build) where, in some extremely specific set of events, a valid backup would just not restore, and we managed to unknowingly run head-first into that edge case. There was already a hot-fix for the problem, but the advice in those days was to apply only full Service Packs and to skip hot-fixes unless you actually needed them. The previous backup was from the previous evening, so nearly 24 hours' worth of data was gone.

It was a big client, so we ended up with several other big guns joining us to find some way to fix the system. I finally remembered being told at one point that our application logs, while full of all sorts of random noise and the occasional error, had enough information to rebuild each and every transaction. I pulled the application logs and took a look-see, and yes, in fact, it was possible (with a bit of effort and knowledge of the system) to duplicate every single INSERT/UPDATE/DELETE. Once I had the script built, I spoke up and told the group what I'd done. They discussed it and determined that it would work (there wasn't anything NOT being logged in that system, so my script was complete). I ended up saving the day, although I still would have preferred that everything had just worked. Why? Because this is an uncomfortable warning that even if you do everything right, you still might be completely and utterly screwed, and there might not be anything you can do to recover. This is the only time I've seen a backup fail, and (in a twisted sort of way) we were lucky because our verbose logging gave us a path out of failure. Most applications today barely log anything, because the risk of a system subject to HIPAA or PCI DSS being compromised is so high. If this scenario transpired today, *I* would be the lead tech who had done everything right and still failed, and no amount of logging could save me.

Profile photo for Charles Knouse

It would be hard to narrow the irks down to one, so I’ll give you my top four.

Mandatory use of tools that cause more problems than they solve. Example: When we submit a change for code review, a static analysis tool is run on the change which is supposed to find bugs like use of uninitialized variables. The problem is that this tool has a very high false positive rate — in my experience, up to 90%. So we have to spend time analyzing and explaining away these false positives. A manager once told me we just have to put up with that.

Rigid adherence to fad methodologies. Example: After pressure from our management, our group finally took up the Scrum development method. In theory this should be a good thing — it should make our planning and execution more adaptable. In reality though, a large part of our activities do not conveniently fall into stories that can be scheduled and completed for our 3 week sprint. We sometimes have bug fixes for customers that have to be done immediately — we can’t tell them they have to wait 3 weeks for the next sprint. We also have a constant stream of code reviews from other groups that we can’t put off. Other activities have dependencies on other groups or people that we can’t control. So we consistently miss our sprint goals. It feels like a cruel sham to me.

Uncommented, poorly written, or tricky legacy code. Most of my work involves code written by other developers, frequently long gone, and a lot of the code is simply CRAP. Some of it is clever CRAP, too clever by half. Example: Our product makes excessive use of macros as a kind of poor man's code generation. I've listed a particularly egregious example below (the names have been changed to protect the guilty). Of course, there is no documentation on what any of these defines mean, and the people who wrote this left the company years ago.

  static void some_function(void)
  {
      /* ~40 #defines mapping short symbols to values, e.g.: */
      #define SYM_A  1
      #define SYM_B  2
      /* ... 38 more ... */
      LIST_OF_LISTS   /* this macro expands to thousands of items built from the symbols */
      /* ~40 matching #undefs so the names can be reused elsewhere */
      #undef SYM_A
      #undef SYM_B
      /* ... */
  }

Customers and their advocates who try to extort service by threats. Nothing pisses me off like a customer, or a sales rep on behalf of a customer, who tries to raise the priority of a bug by threatening a big deal or name-dropping an executive. What makes this even worse is that it will typically be the first time we’ve heard about the bug — it’s been languishing in the support organization long enough to have become critical. So now because of someone else’s incompetence we have to drop everything (there goes the sprint) to fix a bug, usually with very little information. Then the customer will demand updates, sometimes hourly. Situations like this tempt me to just phone in my resignation.

Profile photo for Cristian Amarie
  1. Legacy. Someone, sometime (in a ‘70s galaxy far away), set the length of an error code to 5 chars (2 for the category, 3 for the code). A ton of code relies on ODBC codes, native errors, SQLSTATE (the 2 + 3 split), etc. Rewriting those is very tedious for both the vendor and the clients, who are usually a few versions behind — a complete hell.
  2. Programmers are cryptic. If SQL had been written by poets instead of engineers, the ‘General warning’ would surely have been split into 100 subcases. Coders do not like verbosity in log and error messages when they don’t think it’s needed. It almost always is needed, but we have to deal with it.
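The 2 + 3 split he describes is the SQLSTATE convention: a five-character code whose first two characters name the error class and whose last three name the condition within that class. A minimal sketch of splitting one (the sample code ‘42S02’ is, to my understanding, ODBC’s “base table or view not found”; it is used here purely as an illustration):

```python
def split_sqlstate(code: str) -> tuple[str, str]:
    """Split a 5-char SQLSTATE into (class, subclass)."""
    if len(code) != 5:
        raise ValueError("SQLSTATE must be exactly 5 characters")
    # First 2 chars: error class; last 3: condition within the class.
    return code[:2], code[2:]

# Class '42' covers syntax errors and access-rule violations.
cls, sub = split_sqlstate("42S02")
```

Because so much client code pattern-matches on exactly these two fields, widening or renumbering them breaks every caller at once — hence the “complete hell” of changing it.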
Profile photo for William Liu

No error messages. I’m looking at you, C.

It was hard, coming from Python. Ahh, such a beautiful terminal when there are errors. Makes the debugging life so much more bearable.

Python sucks when you are working in a text editor that can’t turn indents into spaces. Sigh.

Profile photo for Arnold Hausmann

Lee gave a pretty good answer, I’ll add just this one tidbit.

Syntax error messages are by necessity going to be relatively generic. There are thousands of ways for us to mistype the SQL, but only a few dozen ways to catch those mistakes while parsing. And the parser doesn’t look for bad SQL, just for correct constructs in a correct sequence; if something doesn’t match, it can tell you the general kind of mismatch at around the place things went awry — mostly.
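A concrete illustration of this, using Python’s bundled SQLite driver (other engines behave similarly): the parser can only report the token near which matching failed, not what you actually meant to type.

```python
import sqlite3

con = sqlite3.connect(":memory:")
try:
    con.execute("SELEC 1")  # typo for SELECT
except sqlite3.OperationalError as e:
    msg = str(e)

# The message points at the offending token ("SELEC") and gives the
# generic diagnosis "syntax error" -- it cannot say "did you mean SELECT?".
```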

Profile photo for Steve Jones

I personally hate having to wrap my select statements just to be able to use my aggregate column aliases.

You should be able to use the alias in the HAVING clause as well.
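The usual workaround is the wrapping he describes: push the aggregate into a derived table (or CTE) so the alias becomes an ordinary column name in the outer query. A quick sketch with Python’s sqlite3 driver (the table and column names are made up for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (cust TEXT, amt REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("alice", 10), ("alice", 20), ("bob", 5)])

# Portable form: the inner query computes SUM(amt) AS total, and the
# outer query can then filter on 'total' like any real column.
rows = con.execute("""
    SELECT cust, total
    FROM (SELECT cust, SUM(amt) AS total
          FROM orders
          GROUP BY cust)
    WHERE total > 15
""").fetchall()
```

The extra nesting level is pure ceremony — the same filter expressed against the alias directly (`HAVING total > 15`) is rejected by most engines because HAVING is evaluated before the select list’s aliases exist.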

I'm not fond of the syntax for RowCount() and windowing functions.

I hate when you're using CTEs and you get to the end of a chain of a few and realize you still needed data from the very first one.
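One small consolation, though it doesn’t cure the editing pain: every CTE in a WITH chain stays in scope for the rest of the statement, so the final query can still reach back to the first one by name. Sketched with sqlite3 and throwaway names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# 'first' is still visible in the final SELECT, even after 'second'
# has been defined in terms of it.
row = con.execute("""
    WITH first  AS (SELECT 1 AS x),
         second AS (SELECT x + 1 AS y FROM first)
    SELECT first.x, second.y
    FROM first, second
""").fetchone()
```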

I absolutely hate trying to set up transactions over remote servers.

I find it annoying that there are so many reserved keywords that almost always conflict with the column names of tables, making your queries pink.
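Quoting the identifier is the standard escape hatch: wrapped in double quotes (or square brackets on SQL Server), a reserved word is treated as a plain column name — though it won’t stop the editor from coloring it. With sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# "order" is a reserved keyword; double-quoting makes it an ordinary
# column name everywhere it appears.
con.execute('CREATE TABLE t ("order" INTEGER)')
con.execute('INSERT INTO t ("order") VALUES (42)')
val = con.execute('SELECT "order" FROM t').fetchone()[0]
```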

Not a big fan of cursors.

I hate to go negative, so I’d better stop... I could go on, though.
